STRUCTURAL RELIABILITY. THE THEORY AND PRACTICE
Aim. For complex highly-integrated technical systems whose elements vary in their physical nature and operating principles (a combination of mechanical, electrical and programmable electronic components), comprehensive dependability analysis is challenging for both qualitative and quantitative reasons (large number of elements and performed functions, poorly defined boundaries of interfunctional interaction, presence of hidden redundancy, static and dynamic reconfiguration, etc.). The high degree of integration of the various subsystems erodes the boundaries of responsibility in the cause-and-effect chain of failures. Thus, defining the strength and boundaries of interfunctional and cross-system interaction is of great value in the analysis of complex systems, both for locating bottlenecks and for reliably evaluating the overall dependability level.
Methods. In order to solve the tasks at hand, the authors propose a method based on the study of the behavior of the centroid of an area bounded above by the graph of the failure density function, below by the coordinate axis, and on the left and right by the boundaries of the considered operation interval. Graphical analysis with construction of centroids is performed for each subsystem or structural unit of a complex technical system. After that, based on the partial centroids of the respective subsystems/units, the average centroid for the whole complex system is constructed. The authors suggest using the average centroid as a conditional universal measure of the average dependability level of highly-integrated technical systems that can be used in the development of specific design solutions. In particular, the presented method is suggested for identifying the subsystem that, when made redundant, ensures the highest overall growth of dependability of the complex technical system as a whole. This condition is fulfilled by the subsystem/unit whose partial centroid is situated at the greatest distance from the average centroid. The assumptions presented in this article and the results obtained are tested by means of a short verification consisting of the calculation of the probability of no-failure of the system and subsystems, and the construction and analysis of the respective graphs.
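As a minimal numerical illustration of the construction described above, the sketch below computes the partial centroid of the area under each subsystem's failure density over a given operation interval, the average centroid, and the subsystem whose partial centroid lies farthest from it. The Weibull densities, their parameters and the interval are assumptions introduced for this example only and are not taken from the article.

import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import weibull_min

def density_centroid(pdf, t1, t2, n=2000):
    # Centroid (t_c, y_c) of the area bounded above by pdf(t), below by the t-axis,
    # and on the left/right by t1 and t2.
    t = np.linspace(t1, t2, n)
    f = pdf(t)
    area = trapezoid(f, t)
    t_c = trapezoid(t * f, t) / area          # first moment about the vertical axis
    y_c = trapezoid(0.5 * f ** 2, t) / area   # first moment about the time axis
    return t_c, y_c

# Hypothetical subsystems of a conventional mechatronic system (assumed Weibull parameters).
subsystems = {
    "mechanical": weibull_min(c=2.2, scale=9000.0).pdf,
    "electrical": weibull_min(c=1.1, scale=15000.0).pdf,
    "electronic": weibull_min(c=1.5, scale=20000.0).pdf,
}

t1, t2 = 0.0, 10000.0   # considered operation interval, hours (assumed)
partial = {name: density_centroid(pdf, t1, t2) for name, pdf in subsystems.items()}
average = tuple(np.mean([c[i] for c in partial.values()]) for i in (0, 1))

# Redundancy candidate: the subsystem whose partial centroid lies farthest from the average centroid.
candidate = max(partial, key=lambda k: np.hypot(partial[k][0] - average[0],
                                                partial[k][1] - average[1]))
print(partial, average, candidate)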
Results. The method’s implementation is presented using the example of a conventional mechatronic system. For the sake of brevity and focus, the information is given in a simplified and abstract form. The target criterion of the method’s development was the identification of bottlenecks and of the areas with the highest potential for increasing overall dependability by analyzing complex technical system dependability through the study of the failure density centroid introduced in this article. Further publications will be dedicated to proving the applicability of the centroid as a dependability evaluation criterion, as well as to other applications of the presented method in complex technical system dependability analysis.
Aim. One of the strategic areas of development of all oil refineries (OR) in the Russian Federation is the improvement of equipment dependability and safety. The regulatory framework often does not take into consideration the design features of devices, which, on the one hand, standardizes the service conditions, but, on the other hand, may cause inefficient maintenance of individual types of equipment. Due to the unpreparedness of the Russian oil refining complex for the transition from scheduled preventive maintenance to condition-based maintenance, the large amount of obsolete equipment and the continuous growth of technological complexity in modern ORs, the statistical and analytical base of dependability indicators of the equipment in operation needs to be improved and updated. Russian and foreign experience of OR operation shows that damaged OR pump equipment can cause significant material damage and human casualties. A fair share of the faults and failures that can cause accidents in ORs is concentrated in pump and compressor facilities. Ensuring the safety of equipment operation and of ORs as a whole requires reducing the probability of accidents. To that effect, technical condition monitoring facilities are deployed and equipment diagnostics are performed. A priori information analysis is also an option.
Results. The article presents the results of a documental inspection of the performed maintenance of NK, NKV and NPS type pumps of a Russian OR conducted for the purpose of improving the dependability and safety of pump operation. Probabilistic and statistical methods were used. The article presents an analysis of dependability indicators based on the Gompertz-Makeham parametric distribution. This distribution is widely used in survival analysis and characterizes both system deterioration and the influence of factors that do not depend on operation time. The authors analyze maintenance operations and repair cycles, identify the least dependable pump components and the most frequent repair operations, and show the influence of total operation time on pump dependability indicators. For the inspected pumps, the availability factors, utilization factors and average time between maintenance have been defined. The analysis identified that the availability factor of the pumps depends not only on the average time between maintenance (that in turn depends on the frequency of required maintenance), but also on the utilization factor of the pumps. Besides the conventional dependability indicators, i.e. the probability of no-failure and the failure rate, ultimate times to failure for the inspected pumps were identified based on pump failure rate analysis. The ultimate time to failure is the operation time beyond which the gradual deterioration process significantly accelerates and causes a growth of the number and/or severity of partial failures. A significant accumulation of partial failures results in the loss of function or destruction of equipment. From the point of view of the operating services, this dependability indicator is the most important in ensuring normal operation.
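As a hedged sketch of the Gompertz-Makeham model referred to above (all parameter values are assumed for illustration and are not the fitted values from the inspection): the failure rate combines an operation-time-independent component with an exponentially growing deterioration component, and the probability of no-failure follows by integrating the rate.

import numpy as np

def gompertz_makeham_hazard(t, a, b, c):
    # Failure rate: constant, time-independent component a plus deterioration component b*exp(c*t).
    return a + b * np.exp(c * t)

def gompertz_makeham_survival(t, a, b, c):
    # Probability of no-failure up to t: exp(-a*t - (b/c)*(exp(c*t) - 1)).
    return np.exp(-a * t - (b / c) * np.expm1(c * t))

t = np.linspace(0.0, 40000.0, 5)     # operating time, hours (example grid)
a, b, c = 1e-5, 2e-7, 1.5e-4         # hypothetical parameters, not fitted values
print(gompertz_makeham_hazard(t, a, b, c))
print(gompertz_makeham_survival(t, a, b, c))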
Conclusions. The article shows that the identification of the ultimate time to failure for improved dependability and reliability of equipment operation must involve regular updates of the input data in order to identify the beginning of the equipment “aging” process and prevent accidents caused by wear and tear beyond acceptable limits.
Aim. One of the stages of dependability analysis of technical systems is the a priori analysis that is usually performed at early design stages. This analysis assumes that the quantitative dependability characteristics of all system elements used are known a priori. As unique, non-mass-produced or new elements usually lack reliable a priori information on quantitative dependability characteristics, those are specified based on the characteristics of technical elements already in use. A priori information means information obtained as the result of dependability calculation and simulation, various dependability tests, and the operation of facilities similar in design to the tested ones (prototypes). From a system perspective, any research into the dependability of a technical object must be planned and performed subject to the results of previous research, i.e. the a priori information. Thus, the a priori analysis is based on a priori (probabilistic) dependability characteristics that only approximately reflect the actual processes occurring in the technical system. Nevertheless, at the design stage this analysis allows identifying system element connections that are poor from the dependability point of view, taking appropriate measures to eliminate them, as well as rejecting unsatisfactory structural patterns of technical systems. That is why a priori dependability analysis (or calculation) is of significant importance in the practice of technical system design and is an integral part of engineering projects. This paper looks into the primary [1] continuous distributions of random values (exponential, Weibull-Gnedenko, gamma, log-normal and normal) used as theoretical distributions of dependability indicators. In order to obtain a priori information on the dependability of technical systems and elements under development, the authors present dependences that allow evaluating the primary dependability indicators, as well as show approaches to their application in various conditions.
Methods. Currently, in Russia there is no single system for the collection and processing of information on the dependability of diverse technical systems [3], which is one of the reasons for low dependability. In the absence of such information, designing new systems with specified dependability indicators is associated with significant challenges. That is why the information presented in this article is based upon the collection and systematization of information published in Russian sources, the analysis of the results of simulation and experimental studies of the dependability of various technical systems and elements, as well as statistical materials collected in operation.
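As a hedged illustration of how the listed distributions can serve as a priori failure models, the sketch below evaluates two primary dependability indicators, the probability of no-failure R(t) = 1 - F(t) and the failure rate λ(t) = f(t)/R(t), for each of the named laws. The parameter values are assumptions made for this example only and do not come from the article.

import numpy as np
from scipy import stats

def reliability_indicators(dist, t):
    # R(t) = 1 - F(t) and lambda(t) = f(t) / R(t) for a frozen scipy distribution.
    R = dist.sf(t)            # survival function, i.e. probability of no-failure
    lam = dist.pdf(t) / R     # instantaneous failure rate
    return R, lam

t = np.array([100.0, 1000.0, 5000.0])   # operating times, hours (example grid)
models = {
    "exponential":      stats.expon(scale=1.0 / 1e-4),         # failure rate 1e-4 1/h
    "Weibull-Gnedenko": stats.weibull_min(c=1.8, scale=8000),   # shape 1.8, scale 8000 h
    "gamma":            stats.gamma(a=2.0, scale=3000),
    "log-normal":       stats.lognorm(s=0.6, scale=6000),
    "normal":           stats.norm(loc=7000, scale=1500),
}
for name, dist in models.items():
    R, lam = reliability_indicators(dist, t)
    print(name, R.round(4), lam)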
Results. The article presents an analysis of the practical application of the principal continuous distribution laws of random values in the theory of technical system dependability, which allows hypothesizing the possible shape of system element failure models at early design stages for the subsequent evaluation of their dependability indicators.
Conclusions. The article may be useful to researchers at early stages of design of various technical systems as a priori information for construction of models and criteria used for dependability assurance and monitoring, as well as improvement of accuracy and reliability of derived estimates in the process of highly reliable equipment (systems) development.
FUNCTIONAL SAFETY. THE THEORY AND PRACTICE
Aim. The paper considers the problem of estimating the probability of fire occurrence on diesel locomotives of various types and the ways to solve it. The problem arises due to the special aspects of the JSC RZD locomotive fleet. The operating fleet includes diesel locomotives designed and constructed both in the 20th and in the 21st century, which accounts for different causes of fire owing to design differences. The biggest contribution to the differences in fire numbers on new and old type locomotives is made by the construction of the diesel engine as well as the fire resistance of the cables. Research shows that the substantial differences in fire statistics for diesel locomotives of various types over the same period of observation are caused by the volume of the operating diesel locomotive fleet. For instance, the volumes of the operating fleet for some types of diesel locomotives amount to thousands of units (loco-days), while for other types they make up just a few hundred. This raises the questions of whether the period of observation and the volume of the operating fleet are sufficient for estimating the probability and what methods should be used to estimate it. Furthermore, an interval estimation of the probability is required for reliability considerations, i.e. for obtaining “the worst scenario”. Again, this is influenced by the above differences between types of diesel locomotives. The paper also analyzes the necessity of estimating “the worst scenario” and the problems arising in connection with its calculation. The way to enhance the reliability of the calculations is to calculate the upper boundaries of the probabilities. In this case, for some types of diesel locomotives the interval estimation will be a lower boundary of probability rather than “the worst scenario”. The necessity of such estimation is shown for diesel locomotives of specific designs built with materials that comply with modern standards in terms of reliability and fire resistance, or for those whose statistics are too scarce for approximate calculation methods because of the limited operating fleet.
Methods. The authors’ research into the statistics of fires on diesel locomotives of types 2TE10, 3TE10, 2TE116, 2M62, TEP70, ChME3, TEM2 began with the application of a “classic” statistical tool, i.e. testing statistical hypotheses that the distribution law of the random value “fire” belongs to known discrete laws. In the process, the minimum number of tests required to make sure that the target probability estimates have a certain reliability was defined. The condition of a diesel locomotive during operation is not stationary, so a classic estimation of the probability of fire occurrence would lead to uncertainty when applying the results for the purposes of planning and prediction. To evaluate “the worst scenario”, we used both precise and approximate methods for defining confidence boundaries based on “double approximation”. Further, to enable the transition from the estimation of the probability of fire occurrence on diesel locomotives of a certain type to the estimation of the fire probability for certain units, the sufficient amount of rolling stock was determined. The authors have found that the operating fleet should be not less than 610 loco-days to ensure the precision of the probability calculation with an error not exceeding ε. The authors have also identified the method for, and the necessity of, separately estimating the probability of fire on locomotives with an operating fleet of less than 610 loco-days.
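A minimal sketch of an upper (“worst scenario”) confidence boundary of this kind, assuming a binomial model of fire occurrence per loco-day and using the exact Clopper-Pearson bound as an illustration rather than the authors’ specific “double approximation” procedure; the event counts and fleet volumes below are hypothetical.

from scipy.stats import beta

def clopper_pearson_upper(k, n, confidence=0.95):
    # Exact one-sided upper bound for a binomial probability with k events in n trials.
    if k >= n:
        return 1.0
    return beta.ppf(confidence, k + 1, n - k)

# Hypothetical observations: k fires over n loco-days of operating fleet.
for k, n in [(3, 12000), (0, 610), (1, 200)]:
    p_hat = k / n
    p_upper = clopper_pearson_upper(k, n)
    print(f"k={k}, n={n}: point estimate {p_hat:.2e}, upper 95% boundary {p_upper:.2e}")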
Results. Conclusions. In summary, for each type of locomotive we have defined the distribution law of the random value, and calculated interval estimates of the probability of fire occurrence taking into account the volume of the operating fleet. The tools of statistical analysis for calculating the probabilities of fire occurrence on diesel locomotives of various types have also been identified. We have determined methods for calculating interval probability estimates taking into account the available amount of observations with an error not exceeding a specified value ε at the level of 0.2p*_j. This research and the related calculations have enabled us to obtain one of the primary elements for estimating fire risk, i.e. the probability of fire on diesel locomotives of various types.
FUNCTIONAL RELIABILITY. THE THEORY AND PRACTICE
ACCOUNTS
The product testing plan of the NMT type has been chosen as the subject of research. Under this plan, the time between failures is subject to the exponential law, where N is the number of same-type tested products, T is the test duration (the same for each product), and M is a feature of the plan meaning that after each failure the working condition of the product is recovered over the course of the test. In this case, the time to failure is estimated according to the formula T01 = NT/ω, where ω is the number of observed failures (ω > 0) that occurred within the time T. This estimate is biased. Besides that, if the problem at hand involves obtaining a point estimate of the mean time to failure (T0) of products based on tests that did not produce any failures, the estimate T01 cannot be used. If the number of failures observed over the time of testing is small (does not exceed a few), the estimate can contain a significant error due to the bias. In order to solve the above problem, it suffices to find an unbiased efficient estimate T0ef of the value T0, if such exists, in the class of consistent biased estimates (the class of consistent estimates that includes all estimates generated by the method of substitution, of which the maximum likelihood method is a special case, contains estimates with any bias, including a fixed one, in the form of a function of the parameter or a constant). In general, there is currently no rule for finding unbiased estimates, and their identification is a sort of art. In some cases, the resulting unbiased efficient estimates are quite unwieldy and have a complex calculation algorithm. They are also not always sufficiently efficient in the class of all biased estimates and do not always have a considerable advantage over simple yet biased estimates in terms of proximity to the estimated value.
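A hedged simulation sketch of the bias discussed above: if, under the NMT plan, the time between failures is exponential with true mean T0 and failed products are restored with negligible repair time, the number of failures ω observed by N products over the test time T can be modelled as Poisson with mean NT/T0, and the conventional estimate T01 = NT/ω (computable only when ω > 0) noticeably overestimates T0 when the expected number of failures is small. All parameters below are assumed for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, T, T0_true = 10, 500.0, 2000.0    # products, test time, true mean time to failure (assumed)
trials = 200_000

# Modelling assumption: restored exponential products give a Poisson failure count with mean N*T/T0.
omega = rng.poisson(N * T / T0_true, size=trials)
observed = omega > 0                 # T01 is undefined when no failures were observed
T01 = N * T / omega[observed]

print("share of tests with no failures:", 1.0 - observed.mean())
print("mean of T01 over tests with failures:", T01.mean())   # noticeably above T0_true
print("true T0:", T0_true)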
The aim of the article is to find an estimate of the value T0 that is simple, more efficient than the conventional one and only negligibly inferior to the estimate T0ef, if such exists, in terms of proximity to T0 when the NMT plan is used.
Methods. In obtaining an efficient estimate, an integral characteristic was used, i.e. the total relative square of the deviation of the expected realization of the estimate T0ω from various values of T0 over various failure flows of the tested product population. A sufficiently wide class of estimates was considered, and a functional was built based on the integral characteristic; its solution finally allowed deducing a simple and efficient estimate of the mean time to failure for the NMT plan.
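For illustration only, one possible way to formalize such an integral characteristic (an assumed reconstruction, not the authors’ exact functional) is to integrate the relative squared deviation of the expected value of an estimate of the form T0ω = φ(ω)·NT from the true value T0 over an admissible range of T0 and minimize it over φ:

% Assumed illustration of the integral (relative quadratic) criterion
J[\varphi] \;=\; \int_{T_{0\,\min}}^{T_{0\,\max}}
\frac{\bigl(\mathrm{E}\!\left[\hat{T}_{0\omega}\right] - T_{0}\bigr)^{2}}{T_{0}^{2}}\,dT_{0}
\;\longrightarrow\; \min_{\varphi}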
Conclusions. The obtained estimate of the mean time to failure for the NMT plan is efficient within a sufficiently wide class of estimates and cannot be improved within the considered class. Additionally, the obtained estimate enables point estimation of the mean time to failure based on the results of tests that produced no failures.