
Dependability

Vol 17, No 2 (2017)
https://doi.org/10.21683/1729-2646-2017-17-2

STRUCTURAL RELIABILITY. THE THEORY AND PRACTICE

4-10
Abstract

Aim. The article examines the behaviour of renewable objects that are complex systems and generate temporally inhomogeneous failure flows. The objects' dependability is described with a geometric process model. The mathematical model of such processes allows for both the ageing and the renewal of a system. In the first case, the failure flow rate increases with time; this corresponds to the ageing period, when the failure rate progressively grows and the system fails more and more frequently. In the second case, failures that occur at a high rate at the beginning of operation become rare with time; in the technical literature this stage of operation is called the burn-in period. The ordinary renewal process is a special case of the geometric process model. In real operating conditions, not all operation times end with a failure. Situations arise when, as part of preventive maintenance, a shortcoming is identified in an observed object, which is then replaced. Alternatively, for a number of reasons, a procedure may be required for which the object is removed from service and likewise replaced with an identical one; the removed object is repaired, modernized or simply stored. Another situation of unfinished operation occurs when the observation of an object is interrupted, i.e. the object is still operating at the time the observation stops; for example, it may only be known that the object is in operation at the current time. Both of the described situations classify the operation time as right censored. The task is to estimate the parameters of the mathematical model of the geometric process using the known complete and right censored operation times that are presumably governed by the geometric process model. For complete operation times, this task has been solved for various distributions [11-16]. As is known, taking censored data into consideration improves the estimation quality.
In this paper, the estimation task is solved using both complete and right censored data. Additionally, the article aims to provide an analytical justification of the improved estimation quality achieved when censoring is taken into account, as well as a practical verification of the developed method with real data.

Methods. The maximum likelihood method is used to estimate the parameters of the geometric process model. The likelihood function takes right censored data into consideration. The resulting system of equations is solved by means of the Newton-Raphson method.

Conclusions. The article introduces formulas for maximum likelihood estimation of the model parameters under various distribution laws of the time to first failure. The resulting formulas enable the estimation of the parameters of the geometric process model under uncertainty in the form of right censoring. Analytical evidence of improved estimation accuracy when right censored data is taken into consideration is produced. Parameter estimation was performed on real operational data of an element of the Bilibino NPP protection control system.
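The estimation scheme described in this abstract can be pictured with a small numerical sketch. This is only an illustration under assumed choices, not the authors' derivation: it takes an exponential underlying distribution, simulates a geometric process with ratio a (inter-failure times X_k = Y_k / a^(k-1)), right-censors roughly 20% of the times independently, and profiles the likelihood over a, since for fixed a the rate λ has a closed-form estimate (the paper itself solves the likelihood equations with Newton-Raphson).

```python
import math
import random

def simulate_gp(n, lam, a, seed=1):
    """Simulate n operation times of a geometric process with an exponential
    underlying distribution, X_k = Y_k / a**(k-1), Y_k ~ Exp(lam), under
    independent right censoring (about 20% of the times are censored)."""
    rng = random.Random(seed)
    data = []  # (k, observed time, censored flag), k starting at 1
    for k in range(1, n + 1):
        x = rng.expovariate(lam) / a ** (k - 1)      # time to failure
        c = rng.expovariate(lam / 4) / a ** (k - 1)  # independent censoring time
        data.append((k, min(x, c), c < x))
    return data

def profile_loglik(a, data):
    """Log-likelihood maximized over lam for a fixed ratio a.
    Complete times contribute the density, censored times the survival
    function; for fixed a the MLE of lam is d / sum(a**(k-1) * t)."""
    d = sum(1 for _, _, cens in data if not cens)      # number of failures
    s = sum(a ** (k - 1) * t for k, t, _ in data)
    lam_hat = d / s
    ll = d * math.log(lam_hat) + math.log(a) * sum(
        k - 1 for k, _, cens in data if not cens) - d
    return ll, lam_hat

def estimate(data):
    """Grid search over the ratio a (a > 1: ageing, a < 1: improvement)."""
    grid = [0.90 + 0.005 * i for i in range(81)]  # 0.90 .. 1.30
    a_hat = max(grid, key=lambda a: profile_loglik(a, data)[0])
    return a_hat, profile_loglik(a_hat, data)[1]

data = simulate_gp(n=200, lam=1.0, a=1.05)
a_hat, lam_hat = estimate(data)
```

With 200 simulated operation times the profile likelihood recovers the ratio a close to its true value; a finer grid or Newton-Raphson on the likelihood equations would sharpen both estimates.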

11-16
Abstract
State-of-the-art digital circuit design widely uses field programmable gate arrays (FPGAs), in which the functions of the logic cells and their interconnections are configurable. The configuration is defined in a configuration file that is loaded into the FPGA's configuration memory cells (static random access memory) from external memory. The logic itself is implemented in so-called LUTs (Look-Up Tables): multiplexers built on pass transistors that form a tree activated by a specific combination of input variables. The multiplexer data inputs hold the setting, so the value of the logic (switching) function for the given input combination is transmitted to the tree output. As it turns out, the associated LUT setting code can be decoded and used for analyzing synthesis results in Quartus II by Altera, which has been acquired by Intel; Intel now also specializes in FPGA production. The article considers an example of the synthesis of a simple combinational circuit that implements the so-called majority function (2 out of 3), which equals 1 if the majority of its variables equal 1. The majority function implementation diagram is synthesized in Quartus II, which builds a special BDF (Block Diagram/Schematic File) file. The resulting diagram is examined with the Map Viewer. In the corresponding diagram, the LUT (Logic Cell Comb) setting codes implementing the specified function are given in the form of four-digit hexadecimal codes. Decoding is shown for the setting codes of FPGA LUT-type logic cells, which describe the contents of the respective truth tables of functions of the input variables. The article shows how the codes change as Quartus II optimizes the diagram, possibly modifying the order of the variables and their correspondence with the inputs of a four-input LUT without modifying the logic function.
If a Stratix II GX FPGA with so-called adaptive logic modules (ALMs) with 6 inputs is used, Quartus II uses 64-bit codes (eight-digit hexacodes). The respective coding is also examined in this paper.
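The kind of decoding the abstract describes can be illustrated with a short sketch. This is a generic model of a 4-input LUT, not Quartus II's actual display convention: the bit ordering of the mask and the assignment of variables to LUT inputs are assumptions here (Quartus may permute both during optimization, which is exactly why the displayed codes change). With the LSB-first convention below, the 2-out-of-3 majority function on inputs a, b, c, with the fourth input d unused, yields the 16-bit setting code E8E8.

```python
def lut_eval(mask, *inputs):
    """Evaluate a LUT: the setting code 'mask' is the truth table, where
    bit i gives the output for input combination i (LSB-first ordering
    assumed here; the real Quartus II ordering may differ)."""
    i = 0
    for bit, v in enumerate(inputs):
        i |= (v & 1) << bit
    return (mask >> i) & 1

def majority(a, b, c):
    """Reference 2-out-of-3 majority function."""
    return 1 if a + b + c >= 2 else 0

# Build the 16-bit setting code of a 4-input LUT that implements
# majority(a, b, c) with the fourth input d left unused.
mask = 0
for i in range(16):
    a, b, c = (i >> 0) & 1, (i >> 1) & 1, (i >> 2) & 1
    mask |= majority(a, b, c) << i

print(f"LUT setting code: {mask:04X}")  # four-digit hex, as in Map Viewer
```

Swapping which variables feed which LUT inputs permutes the bits of the mask without changing the implemented function, which is the effect the article traces through the optimization steps.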
17-23
Abstract

Aim. The maintenance of Russia's railway network requires significant expenditure in order to support the dependability of infrastructure facilities in operation. When resources are limited, a wrong decision can cause errors in maintenance planning. The activities of track enterprises define the normal operation of the railway infrastructure as a system. Rational management of infrastructure facilities requires the availability of objective real-time information on their dependability and functional safety. One of the key indicators that characterize the dependability of track is the availability coefficient. When evaluating partially available facilities, it must be considered how partial non-fulfilment or reduced quality of their functions impacts availability. The conventional formula for the technical availability coefficient allows for only two possible facility states: operable and non-operable. Such an evaluation of the technical availability coefficient does not, for instance, allow for reduced availability as a result of a speed restriction on a line section, or for the impact of a failure of a line section on the overall availability of the line. Therefore, this article deals with a method of evaluation of the technical availability coefficient of a line section subject to its partial operability, and considers an approach to the standardization of the technical availability coefficient of a line section.

Methods. The evaluation of the technical availability coefficient of a line section subject to its partial operability involved a system analysis of the factors that reduce track capacity. Among such factors are speed restrictions and interruptions of traffic due to scheduled and non-scheduled maintenance operations. A three-dimensional graphic model of the dependence of movement speed on linear coordinates and time is suggested; it was used to deduce the formulas for the evaluation of the technical availability coefficient of single and n-track lines. An approach to the standardization of individual components of the technical availability coefficient was considered. Correlations were deduced for the calculation of the standard value of the technical availability coefficient of single and n-track lines.

Conclusions. Upon an examination of the factors that cause partial operability and non-operability of railway track, the authors offer a method for the evaluation of the technical availability coefficient of a railway line subject to the effects of speed restrictions on the capacity, and thereby the availability, of track. Aspects of the standardization of the technical availability coefficient were examined. Formulas were obtained that allow calculating the actual availability coefficient of a railway line subject to partial operability, as well as the standard value of this indicator. The approaches and methods considered in this paper aim to improve the objectivity of the evaluation of track availability to enable well-founded decision-making in operation.
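One simple way to picture an availability coefficient that accounts for partial operability is to weight each operating interval by the capacity actually available in it, rather than counting it as fully up or fully down. The sketch below only illustrates that idea; the weighting of a speed restriction by the ratio of restricted to normal speed is an assumption for the example, while the paper's own formulas for single and n-track lines are derived from the three-dimensional speed model.

```python
def technical_availability(intervals):
    """Availability of a line section over (duration_h, weight) intervals,
    where weight = 1.0 for full operability, 0.0 for a closure, and an
    intermediate value (assumed here to be the ratio of the restricted
    speed to the normal speed) for a speed restriction."""
    total = sum(d for d, _ in intervals)
    effective = sum(d * w for d, w in intervals)
    return effective / total

# One month (720 h) of operation of a hypothetical line section
month = [
    (650.0, 1.0),     # normal operation
    (60.0, 40 / 80),  # speed restriction: 40 km/h instead of 80 km/h
    (10.0, 0.0),      # track closed for non-scheduled repair
]
k_t = technical_availability(month)  # -> about 0.944
```

A two-state formula would report 710/720 ≈ 0.986 for the same month, so the weighted form captures the capacity lost to the restriction that the conventional coefficient ignores.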

24-30
Abstract

Aim. The development of the electronics industry is associated with fast growth of product functionality, which in turn causes increasing structural complexity of radioelectronic systems (RES) along with more pressing dependability requirements. The currently used methods have several shortcomings, the most important of which is that they allow accurately evaluating reliability indicators only in individual cases. This type of estimation can be used for the verification of compliance with specifications, but it does not enable RES dependability analysis after the manufacture of the pilot batch of equipment. That is why the task of identification of the dependability indicators of manufactured radioelectronic systems is of relevance.

Methods. The paper examines the a posteriori analysis of RES dependability that is performed after the manufacture of the pilot batch of equipment in order to identify its dependability characteristics. Such tests are necessary because at the design stage the design engineer does not possess complete a priori information that would allow identifying the dependability indicators in advance and with sufficient accuracy. An important source of dependability information is the system for the collection of data on product operational performance. There are two primary types of dependability tests. One of them is the determinative test intended for the evaluation of dependability indicators; it is typical for mass-produced products. The other type is the control test designed to verify the compliance of a system's dependability indicator with the specifications. This paper is dedicated to the second type of test.

Results. The question must be answered whether the product (manufactured RES) dependability characteristics comply with the requirements of the manufacturing specifications. This task is solved with the mathematical tools of the statistical theory of hypotheses. Two hypotheses are under consideration: hypothesis H0, mean time to failure t*=T0 as per the specifications (good product); hypothesis H1, mean time to failure t*=T1<T0, the alternative (bad product). The conventional hypothesis verification procedure has a disadvantage: the quality of the solution is only identified after the test, so the procedure is not optimal. The paper examines the sequential procedure of hypothesis verification (Wald test) that involves decision-making after each failure and interruption of the test as soon as a decision of the specified quality is possible. An algorithm is shown for verifying the compliance of the resulting sample distribution law with the exponential law or another distribution law using the χ2 criterion.

Conclusions. It was shown that the test procedure [n, B, r] ensures a decision quality identical to that of the procedure [n, V, r], provided the testing time t is identical. Under the sequential procedure, the number of failures r and the testing time are not known from the beginning, so a combined (mixed) procedure is used: a failure threshold limit r0 is additionally defined and the decision rule is complemented with the condition that if r < r0, the sequential procedure is used, and if r = r0, the normal procedure is used. An algorithm is shown for verifying the compliance of the distribution law of the resulting sample w1(yi) with the exponential law or another distribution law using the χ2 criterion. The paper may be of interest to radioelectronic systems design engineers.
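The sequential (Wald) procedure with the truncation r0 described above can be sketched as follows, assuming exponential times between failures; the risk values α, β, the mean times and the sample times are illustrative, not taken from the article.

```python
import math

def wald_sprt(times, T0, T1, alpha=0.1, beta=0.1, r0=50):
    """Sequential test of H0: mean time to failure = T0 (good product)
    against H1: mean = T1 < T0 (bad product), exponential times assumed.
    The log likelihood ratio is updated after each failure; the test stops
    at the Wald thresholds ln((1-beta)/alpha) and ln(beta/(1-alpha)), or
    falls back to a fixed-sample decision at the truncation r0 (the mixed
    procedure)."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross downward -> accept H0
    llr = 0.0
    for r, t in enumerate(times, start=1):
        llr += math.log(T0 / T1) - t * (1 / T1 - 1 / T0)
        if llr >= upper:
            return "accept H1", r
        if llr <= lower:
            return "accept H0", r
        if r >= r0:  # mixed procedure: switch to the normal decision rule
            return ("accept H0" if llr < 0 else "accept H1"), r
    return "continue", len(times)

# A batch running exactly at the specified mean T0 = 1000 h ...
print(wald_sprt([1000.0] * 20, T0=1000, T1=500))  # ('accept H0', 8)
# ... and a clearly bad batch with 100 h between failures.
print(wald_sprt([100.0] * 20, T0=1000, T1=500))   # ('accept H1', 4)
```

The appeal of the sequential form is visible in the example: a clearly bad batch is rejected after only four failures, long before a fixed-sample plan would have finished.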

31-35
Abstract

Aim. To research the potential for wide application of electrical noise in the nondestructive testing of electronic devices and to provide a theoretical justification of its use for such purposes. To that effect, the fundamental electrical noises are examined and those types that can in principle be used for nondestructive testing are analyzed.

Methods. The article contains theoretical research findings regarding the fluctuation processes behind several types of electrical noise and the degradation processes in electronic devices. The connection between the spectral properties of the fluctuations and the characteristics of the degradation processes in electronic devices is analyzed. On this basis, conclusions are made regarding the possibility of using electrical noise for the non-destructive testing of electronic devices. Electrical fluctuation phenomena are caused by the capture and emission of charge carriers by traps created by structural defects in the solid body. The processes of capture and emission of charge carriers by traps are the fundamental cause of the following fundamental types of electrical noise: excess, generation-recombination and burst noise. The various types of noise differ significantly in terms of the parameters and statistical properties of the fluctuation processes, which is why the electrical fluctuations caused by traps were analyzed in order to provide a sufficiently general description of such fluctuation phenomena. The work resulted in a rigorous description of the electrical fluctuations caused by traps. A general expression for the fluctuation spectrum was derived, from which, in special cases, one can pass to the spectra of excess, generation-recombination and burst noise. The findings regarding the electrical fluctuations caused by traps can be used for the identification of the spectral properties of fluctuations in solid materials and solid-state electronic devices.
A rigorous quantitative analysis was made of the degradation processes that occur in solid-state electronic devices in order to establish associations between the spectral characteristics of the noise caused by the capture and emission of charge carriers by structural defects and the degree of material defectiveness, so that the noise can be better exploited in the evaluation of the quality and dependability of electronic devices. It was established that the noise spectral density is associated with the degree and rate of the structure's degradation; thus, noise in electronic devices contains information on the degree and rate of degradation. The following practical conclusions were made. The noise spectral density is associated with the initial number of defects in the device, as well as with the rate of defect formation and, consequently, the ageing rate of the electronic device. Therefore, noise contains information on the quality of the manufactured device and the rate of change of its characteristics. Accordingly, the noise spectrum can be used in the evaluation of an electronic device's deficiencies, both those occurring during the manufacturing process and those that manifest themselves in operation.
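A standard result behind this kind of analysis: a single trap with characteristic time τ produces a Lorentzian spectrum S(f) ∝ τ/(1+(2πfτ)²) (generation-recombination noise), while a broad, log-uniform distribution of trap time constants sums to a 1/f (excess) noise spectrum. The sketch below illustrates only this textbook superposition with arbitrary amplitudes; it is not the article's general expression.

```python
import math

def lorentzian(f, tau):
    """Spectrum of a single two-level trap (generation-recombination noise)."""
    return 4 * tau / (1 + (2 * math.pi * f * tau) ** 2)

def excess_spectrum(f, tau_min=1e-6, tau_max=1.0, n=600):
    """Superpose Lorentzians with log-uniformly distributed time constants;
    inside the band 1/(2*pi*tau_max) << f << 1/(2*pi*tau_min), the sum
    behaves as 1/f (excess noise)."""
    dln = math.log(tau_max / tau_min) / n
    total = 0.0
    for i in range(n):
        tau = tau_min * math.exp((i + 0.5) * dln)  # midpoint on a log grid
        total += lorentzian(f, tau) * dln
    return total

ratio = excess_spectrum(10.0) / excess_spectrum(100.0)
print(f"S(10 Hz)/S(100 Hz) = {ratio:.2f}")  # close to 10, i.e. 1/f behaviour
```

This is why the distribution of trap time constants, and hence the defect population, leaves a readable signature in the measured spectrum: a single dominant defect species gives a Lorentzian knee, while a wide defect ensemble gives the 1/f slope.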

Conclusions. The paper substantiates the potential for wide application of electrical noise in the non-destructive testing of electronic devices and shows the feasibility of using the fundamental types of electrical noise for the above purposes. The rigorous substantiation of the use of electrical noise for nondestructive testing of electronic devices, the feasibility of evaluating device defects caused by various factors, the use of common, frequently prevailing types of noise and the high sensitivity of fluctuation spectroscopy highlight the efficiency of electrical noise in the nondestructive testing of electronic devices.

FUNCTIONAL RELIABILITY. THE THEORY AND PRACTICE

36-40
Abstract

Aim. The article is dedicated to the challenges of evaluating the functional dependability of the display unit software (SW) that is part of the BLOK vital integrated onboard system, as attributed to program errors within a 24-hour target time. One of the key tasks is the calculation of the values of such SW functional dependability characteristics as accuracy, correctness, security, controllability, reliability, fault tolerance and availability, which are the primary indicators for evaluating the health of safety devices. Based on the above, it is to be evaluated whether checking the display unit software with a departure test before each trip is required.

Method. The reference conditions do not contain statistical data on program executions over the course of the software's maintenance. There is also no information on the structural characteristics of the program (number of operators, operands, cycles, etc.), which prevents the use of such statistical models of dependability as the Halstead metrics, the IBM model or similar ones. That is why the Schumann model was chosen as the initial data definition apparatus. The method of evaluation of the display unit's functional dependability is based on the findings of [1].

Results. At the first stage, the following initial data values were defined: the initial number of defects in the program, the program failure rate and the probability of a correct run. At the subsequent stage, the identified values were used to define such dependability parameters as the probability of no error as the result of a program run within a given time, the probability of no failure of the display unit as the result of a program run within a given time, and the mean time to program failure. After the probability PSW(t) of no error as the result of a program run within a given time was calculated, such SW dependability attributes as accuracy, correctness, security and controllability were evaluated. After the probability of no failure of the display unit PR(t) as the result of a program run within a given time was calculated, such attributes as SW reliability and fault tolerance were evaluated; after the mean time to program failure TavSW was calculated, and knowing the mean downtime due to the elimination of a program error τpdt, the availability of the display unit for faultless execution of an information process at an arbitrary point in time Cfa was defined. The calculated partial functional availability coefficients for the display unit have shown that pre-trip checking of the unit and immediate elimination of errors, should such be identified, will enable a significant improvement of the user performance of the onboard display unit (BIL) in terms of timely notification of the driver of the current operational situation to enable timely train control decision-making.
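The chain of quantities in this abstract can be sketched with the Schumann model's basic relations: after n_c errors have been corrected, the program failure rate is commonly taken as λ = C·(N0 − n_c) for an initial number of defects N0 and a proportionality constant C; the probability of an error-free run of duration t is then P(t) = exp(−λt) and the mean time to program failure is 1/λ. All numerical values below are hypothetical placeholders, not the article's BLOK data.

```python
import math

def schumann(n0, c, corrected, t, tau_pdt):
    """Basic Schumann-model quantities: the failure rate after 'corrected'
    errors have been removed, the probability of an error-free run of
    duration t, the mean time to failure, and the availability coefficient
    given a mean downtime tau_pdt per error elimination."""
    lam = c * (n0 - corrected)              # program failure rate, 1/h
    p_no_error = math.exp(-lam * t)         # P_SW(t)
    mttf = 1 / lam                          # T_av_SW
    availability = mttf / (mttf + tau_pdt)  # C_fa
    return lam, p_no_error, mttf, availability

# Hypothetical values: 50 initial defects, 42 already corrected,
# C = 2e-4 per defect-hour, 24 h target time, 0.5 h mean repair time.
lam, p, mttf, c_fa = schumann(n0=50, c=2e-4, corrected=42, t=24, tau_pdt=0.5)
```

The structure mirrors the abstract's pipeline: the initial-data triple (N0, C, corrected count) feeds the failure rate, which in turn yields the run probability, the mean time to failure and, with the downtime, the availability coefficient.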

41-47
Abstract
The aim of this article is to develop a model that would allow quantitatively evaluating the function-level fault tolerance of navigation signal provision processes in adverse reception conditions using consumer navigation equipment (CNE). The article also substantiates the relevance and importance of evaluating the function-level fault tolerance of consumer navigation systems in those cases when the reception of the signals is affected by industrial interference, pseudo-satellites, re-reflections from urban structures and terrain features. The function-level fault tolerance of the processes of provision of navigation signals (of CNE) to consumers in adverse conditions is understood as their ability to fulfil their functions and retain the allowed parameter values under information technology interference within a given time period. The adverse conditions of provision of navigation data (signals) to consumers are understood as a set of undesirable events and statuses of reception and processing of navigation data with possible distortions. The article analyzes a standard certificate of vulnerabilities of the navigation signal (by the example of distortion of pseudorange and pseudovelocity values) that defines the input data for the analysis of CNE fault tolerance.
The model is based on the following approaches: the navigation signal parameters are pseudorange and pseudovelocity, system almanac data and ephemeris information; the quantitative evaluation of the function-level fault tolerance of the processes of navigation signal provision to users is based on the probability of no failure of CNE in adverse conditions; the function-level fault tolerance of the above processes is ensured by means of the integrated use of functional, hardware, software and time redundancy; the hardware and software structure of the CNE fault tolerance facilities has the form of a three-element hot and cold standby system; the allowable level of function-level fault tolerance violation risk is defined according to the ALARP principle. It is shown that CNE fault tolerance and jamming resistance are based on the following: the use of multisystem navigation receivers; navigation signal integrity supervision; spatial and frequency-time selection of the signal; precorrelation processing of the signal and interference mixture; postcorrelation signal processing; processing of the radio-frequency and information parameters of the signal; cryptographic authentication; integration with external sources of navigation information and, within a single signal processing system, of a number of methods of countering interference and pseudo-satellite navigation signals. The proposed model defines the CNE function-level fault tolerance as two variants of dynamic dependability models, in which the values of the probability of no failure are time-dependent: a hot standby system that includes three additional countermeasure modules and a cold standby system with a switch to three additional countermeasure modules.
The model allows visualizing the processes of navigation signal provision to users in adverse conditions and quantitatively evaluating the probability of no failure for hot and cold standby systems with three modules of information technology interference countermeasures, the probability of recovery and the CNE availability coefficient, as well as the allowable risk of CNE fault tolerance violation.
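The two dynamic dependability variants mentioned above can be illustrated with the classic exponential formulas: a hot (parallel) standby group of identical modules fails only when all of them have failed, while an ideal cold standby with a perfect switch yields an Erlang-type survival probability. This is a generic sketch with an assumed common failure rate and module count, not the article's CNE model.

```python
import math

def r_hot(lam, t, n):
    """Hot (parallel) standby of n identical modules, each with exponential
    failure rate lam: the system fails only when all n modules have failed."""
    return 1 - (1 - math.exp(-lam * t)) ** n

def r_cold(lam, t, n):
    """Cold standby of n identical modules with an ideal switch: the system
    lifetime is the sum of n exponential lifetimes (Erlang survival)."""
    x = lam * t
    return math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))

lam, t, n = 1e-3, 1000.0, 3   # assumed failure rate per hour, mission time, modules
hot, cold = r_hot(lam, t, n), r_cold(lam, t, n)
```

For the same number of units, the ideal cold standby is at least as reliable as the hot group, since unused units do not age; an imperfect switch, which a recovery and availability analysis must account for, reduces this advantage.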

FUNCTIONAL SAFETY. THE THEORY AND PRACTICE

48-55
Abstract

Aim. Fire safety of a protection asset is the state of a protection asset that is characterized by the capability to prevent the occurrence and development of fire, as well as the effects of hazardous fire factors on people and property [1]. The traction rolling stock (TRS) is one of the primary protection assets in railway transport. Managing TRS fire safety involves a large volume of information on the various TRS types: possible fire-hazardous conditions, fire safety systems, parameters of TRS-related processes. That means that efficient management must be built upon analysis that allows identifying the trends and factors of fire hazard development. The analysis should be organized in such a way that its results can be used in the evaluation of composite safety indicators [2]. The required applied nature of such analysis is also obvious. Given the above, it should be noted that applied research indirectly solves the task of using the results of fundamental research to address not only cognitive but also societal issues [3]. The aim of this article is to structure the most efficient applied and theoretical methods of analysis and to develop a structure for the systems analysis of TRS fire safety.

Methods. The multitude of factors that affect the condition of TRS can be divided into two groups: qualitative and quantitative. Importantly, it is impossible to completely research the impact on fire safety of all the elements of a complex technical system such as TRS. We have to examine a part of the whole, i.e. a sample, and then use probabilistic and statistical methods to extrapolate the findings of the sample examination to the whole [4]. The analysis of a data set requires a correctly defined sample. At this stage, the quality of information is the most important criterion. The list of raw data was defined based on the completeness of the description and the reliability of the sources. Then, in a certain sequence, the data was analyzed by means of qualitative and semi-quantitative methods. First, given the impossibility of establishing evident connections (destroyed by the hazardous effects of fire) between the conditions of units that preceded the fire, Pareto analysis was used. The research involved root cause analysis (the Ishikawa diagram). Subsequently, cluster analysis of fire-hazardous situations was used. The main purpose of the cluster analysis consists in establishing generic sequences of events that lead to TRS fires. For that purpose, a description of the possible fire-hazardous states of traction rolling stock is required, i.e. a multitude of events and states must be described. Dependability analysis can be successfully performed by representing the safety state information in terms of set theory [5]. The sets of hazardous fire-related TRS events are represented in the form of partially ordered sets. Processing of such sets, which are non-numeric in nature, cannot be performed by means of statistical procedures based on the addition of parametric data. For that reason, the research used mathematical tools based on the notion of distance. The part of the data that has quantitative characteristics was analyzed statistically.

Results. The TRS fire safety data analysis methods presented in this article, which include methods of numeric and non-numeric data processing, allowed developing a formalized list of fire hazard factors that enables the creation of a practical method of TRS fire risk calculation. An algorithm is proposed for the application of qualitative and quantitative methods to the analysis of data of various natures. An example is given of the algorithm's application in the analysis of diesel engine fire safety. The proposed method can be used for analyzing anthropogenic safety in terms of listing the factors involved in risk assessment.
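The first step of the method above, Pareto analysis, can be sketched as follows: rank the fire cause categories by frequency and keep the smallest set that covers a chosen share (commonly 80%) of all recorded cases. The categories and counts below are invented placeholders, not data from the article.

```python
def pareto(counts, share=0.8):
    """Return the smallest set of categories (most frequent first) whose
    cumulative share of all recorded cases reaches 'share'."""
    total = sum(counts.values())
    acc, vital = 0, []
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        vital.append(cause)
        acc += n
        if acc / total >= share:
            break
    return vital

# Hypothetical cause counts for diesel locomotive fires
causes = {
    "electrical wiring": 41,
    "fuel system leak": 27,
    "exhaust manifold": 14,
    "turbocharger": 9,
    "oil system": 6,
    "other": 3,
}
vital_few = pareto(causes)  # the "vital few" causes to address first
```

The output of this step would then feed the subsequent root cause (Ishikawa) and cluster analyses, which operate on the short list of dominant factors rather than on the whole data set.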

REPORTS



ISSN 1729-2646 (Print)
ISSN 2500-3909 (Online)