
Dependability

Vol 22, No 1 (2022)

STRUCTURAL RELIABILITY. THE THEORY AND PRACTICE

4-12
Abstract

Aim. The paper analyses the effect of information redundancy on the functional dependability indicators of distributed automated information systems. Information redundancy in the form of hot standby and HDD archives located in the system nodes is examined. Methods. Concepts of probability theory and Markov processes are employed. Results. Indicators of the operational dependability of distributed information systems and the effect of operational and recovery redundancy of data sets on these indicators are analysed. The efficiency of three backup strategies in distributed systems is also analysed. Conclusions. Using information redundancy significantly improves the dependability and operational efficiency of distributed systems. At the same time, this type of redundancy requires a certain increase in operating costs.
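As a rough illustration of the kind of Markov model referred to above, the sketch below (not the authors' model; the failure and recovery rates are hypothetical) computes the steady-state availability of a data set kept as a hot-standby copy in two nodes.

```python
import numpy as np

# Sketch only (not the authors' model): steady-state availability of a data set
# kept as a hot-standby copy in two nodes, modelled as a birth-death Markov chain.
lam, mu = 0.01, 0.5   # hypothetical failure / recovery rates of one copy, 1/hour

# States: 0 = both copies available, 1 = one copy lost, 2 = both copies lost.
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,           mu,  -mu],
])

# Solve pi @ Q = 0 with sum(pi) = 1: replace one balance equation
# with the normalisation condition.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

availability = pi[0] + pi[1]   # data accessible while at least one copy survives
print(f"steady-state data availability: {availability:.6f}")
```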

13-19
Abstract

Aim. The paper examines technical systems (machinery and equipment) whose condition deteriorates in the course of operation, yet can be improved through repairs (overhaul). The items are subject to random failures. Upon each failure, an item can either be repaired or disposed of. A new or repaired item is to be assigned the date of the next scheduled repairs. Regarding a failed item, the decision is to be taken as to unscheduled repairs or disposal. We solve the problem of optimizing such a repair policy. At the same time, it proves to be important to take into consideration the effect of repairs, first, on the choice of appropriate indicators of item condition that define its primary operational characteristics, and second, on a sufficiently adequate description of the dynamics of the items' performance indicators. Methods. Assigning the timeframe of scheduled repairs normally involves the construction of economic and mathematical optimization models that are the subject matter of a vast number of publications. They use various optimality criteria, e.g., the probability of no failure over a given period of time, average repair costs per service life or per unit of time, etc. However, criteria of this kind do not take into account the performance dynamics of degrading items and do not fully meet the business interests of the item owners. The criterion of maximum expected total discounted benefits is more adequate in such cases. It is adopted in the theory of investment project efficiency estimation and the cost estimation theory and is, ultimately, focused on maximizing a company's value. The model's formulas associate the item's benefit stream with its primary characteristics (hazard of failure, operating costs, performance), which, in turn, depend on the item's condition. The condition of non-repairable items is usually characterized by their age (operating time). Yet the characteristics of repairable items change significantly after repairs, and, in recent years, their dynamics have been described by various models using Kijima's virtual age indicator (a similar indicator of effective age has long been used in the valuation of buildings, machinery and equipment). That allows associating the characteristics of items in the first and subsequent inter-repair cycles. However, analysis shows that this indicator does not allow taking into consideration the incurable physical deterioration of repaired items. The paper suggests a different approach to describing the condition of such items that does not have the above shortcoming. Conclusions. The author constructed and analysed an economic and mathematical model for repair policy optimisation that is focused on maximizing the market value of the company that owns the item. It is suggested to describe the condition of an item with two indicators, i.e., the age at the beginning of the current inter-repair cycle and the operating time within the current cycle. It proves to be possible to simplify the dependence of an item's characteristics on its condition by using the general idea of the Kijima models, while more adequately taking into consideration the incurable physical deterioration of such an item. Experimental calculations conducted by the author show that the interval to the next scheduled repairs shortens as the item's age at the beginning of the inter-repair cycle increases. Some well-known repair policies were critically evaluated.
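For readers unfamiliar with the virtual age indicator mentioned above, the following sketch shows the two classic Kijima update rules; the restoration factor and cycle lengths are hypothetical, and the author's own two-indicator description of item condition is not reproduced here.

```python
# Illustrative sketch of the classic Kijima virtual-age updates (not the
# author's alternative model). x is the operating time accrued in the cycle
# that has just ended, v the virtual age before that cycle, and q in [0, 1]
# a hypothetical restoration factor (0 = "as good as new", 1 = "as bad as old").

def kijima_type_1(v: float, x: float, q: float) -> float:
    """Repairs remove only part of the damage accumulated in the last cycle."""
    return v + q * x

def kijima_type_2(v: float, x: float, q: float) -> float:
    """Repairs also remove part of the damage accumulated in earlier cycles."""
    return q * (v + x)

# Example: three inter-repair cycles of 1000 h each with q = 0.6.
v1 = v2 = 0.0
for x in (1000.0, 1000.0, 1000.0):
    v1 = kijima_type_1(v1, x, 0.6)
    v2 = kijima_type_2(v2, x, 0.6)
print(v1, v2)   # 1800.0 and 1176.0 with these assumed inputs
```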

20-29
Abstract

Aim. To examine the design engineering approach to ensuring specified dependability on the basis of engineering disciplines and design engineering methods of quality and dependability assurance, using the case of unique, highly critical products with short operation life. Such an approach, unlike the statistical procedures of the modern dependability theory, allows associating the dependability indicator calculations with the calculated operability parameters and established design criteria that are to be met in order to confirm the specified dependability indicators for products with an indefinite number of critical elements, each of which operates according to a functional principle that is different in its nature. Methods. The paper examined the prerequisites for the implementation of the design engineering approach to dependability, such as the distinctive features of ensuring the dependability of unique, highly critical products with short operation life, the applicability of the design engineering approach to dependability, the effect of the genesis on the assurance of design engineering dependability, behavioural models of technical products in terms of dependability and the specifics of highly critical product calculation. It was identified that, for items whose high specified probability of no failure lies beyond the three-sigma interval of random value variation, dependability is to be calculated not by identifying the dependability function, but rather by proving that the undependability function is below the acceptable value, which ultimately ensures the specified dependability. Such an approach enables the development of methods of early failure prevention using procedures of design engineering analysis of dependability for the purpose of achieving the required parameters of functionality, operability and dependability of products on the basis of a generalised parametric functional model. Results. The design engineering analysis of dependability allows substantiating the criteria for error-free design (selection of sound principles of operability and validation of engineering solutions for achieving the required dependability indicators). Meeting the error-free design criteria combined with the criteria for defect-free design (observance of the generally accepted principles, rules, requirements, norms and standards of drawing generation) and defect-free manufacture (strict adherence to the requirements of drawings with no deviation permits) enables a designer to achieve the specified dependability values without using the statistical methods of the modern dependability theory. Conclusion. Dependability as a comprehensive property is characterised by a probability that, on the one hand, determines the rate of possible failures, and, on the other hand, reflects the number of errors made by engineers during the design, manufacture and operation of products that can lead to failures. Additionally, the failure rate is determined by the engineers' efforts to eliminate or mitigate the consequences of possible failures at each life cycle stage. The greater such efforts are and the earlier they are made, the higher the product's dependability will be. Ultimately, dependability is determined by the consistent and rigorous implementation of error-free design, defect-free design and defect-free manufacture procedures, whose efficiency is in no way associated with the number of manufactured products. Their efficiency and effectiveness are determined by the specific decisions and actions of the engineers who make sure that the product performs the required functions with the specified dependability in the established modes and conditions of operation. That can only be ensured by relying on engineering disciplines, as well as on design engineering methods of quality and dependability assurance.
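A small numeric illustration of the three-sigma argument above (the specified value of 0.9999 is an assumed figure): a one-sided three-sigma margin corresponds to a no-failure probability of only about 0.99865, so a higher specified value has to be confirmed by bounding the undependability Q = 1 − R rather than by estimating the dependability function statistically.

```python
from math import erf, sqrt

# Numeric illustration with assumed figures (not from the paper): the no-failure
# probability corresponding to a one-sided 3-sigma margin is ~0.99865, so a
# requirement such as R_spec = 0.9999 lies beyond the 3-sigma interval and is
# instead confirmed by showing that the undependability Q = 1 - R stays below
# the acceptable level Q_acc.

def normal_cdf(z: float) -> float:
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

r_three_sigma = normal_cdf(3.0)   # ~0.998650
r_spec = 0.9999                   # hypothetical specified value
q_acc = 1.0 - r_spec              # acceptable undependability, 1e-4

print(f"R at 3 sigma : {r_three_sigma:.6f}")
print(f"Q acceptable : {q_acc:.1e}")
print("specified R exceeds the 3-sigma level:", r_spec > r_three_sigma)
```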

30-37
Abstract

The ideal estimation scenario involves an unbiased estimate with minimal variance, if such an estimate exists. Currently, there are no means of obtaining unbiased estimates (if they do exist!). For instance, the maximum likelihood estimate (NBT test plan) of the mean time to failure, Tmn = (total operation time)/(number of failures), is highly biased. Those involved in solving applied problems are not satisfied with the situation. Efficient unbiased estimates are used whenever they are available. If it is impossible to find an unbiased estimate that is efficient in terms of standard deviation, then one has to resort to comparing biased estimates. The vast majority of problems are associated with biased estimates. Within the class of biased estimates, estimates with minimal bias are to be sought, and, among the latter, those with minimal variance. Such estimates in the class of biased estimates should be called bias-efficient or simply efficient, which does not contradict the conventional definition, but only extends it. Such a search process guarantees that the obtained estimates are highly accurate. However, with this definition of a bias-efficient estimate, there will always be pairs of compared estimates in which the total bias of one estimate is slightly higher than that of the other, while their total variances are ranked in the opposite order. In this setting, a formal selection of a bias-efficient estimate becomes impossible, and the choice is arbitrary, i.e., the test engineer selects a bias-efficient estimate intuitively. In this case, the test engineer's choice may prove to be incorrect. Thus arises the problem of constructing a criterion of efficiency that would enable a formal selection of a bias-efficient estimate. The Aim of the paper. The paper aims to build an efficiency criterion, using which the choice of a bias-efficient estimate is unambiguously defined through computation. Methods of research. To find the bias-efficient estimate, we used integral numerical characteristics of estimation accuracy, namely, the total squared deviation of the expected value of a candidate estimate from the examined parameters of the distribution laws, etc. Conclusions. 1) For the binomial plan and the test plan with recovery and limited test time, efficiency criteria were constructed that allow unambiguously identifying the bias-efficient estimate out of the submitted estimates. 2) Based on the constructed efficiency criteria for various test plans, bias-efficient estimates were selected out of the submitted ones.
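The following Monte Carlo sketch illustrates the kind of comparison discussed above for a test plan with recovery and limited test time; the two candidate estimates, the grid of true values and the integral accuracy measures are assumptions for illustration, not the paper's criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: for a test plan with recovery and limited test time
# (N items, replacement on failure, fixed duration T), the number of failures r
# is Poisson with mean N*T/theta. Two candidate estimates of the mean time to
# failure theta are compared by integral squared bias and integral MSE over a
# grid of true theta values, in the spirit of the paper's integral accuracy
# measures (the estimates and the grid below are assumptions, not the author's).
N, T = 10, 100.0
thetas = np.linspace(200.0, 2000.0, 19)   # hypothetical range of true MTTF
trials = 20_000

def summarize(estimator):
    bias2, mse = 0.0, 0.0
    for theta in thetas:
        r = rng.poisson(N * T / theta, size=trials)
        est = estimator(r)
        bias2 += (est.mean() - theta) ** 2
        mse += ((est - theta) ** 2).mean()
    return bias2, mse

est_mle = lambda r: N * T / np.maximum(r, 1)   # classical estimate, r = 0 handled crudely
est_alt = lambda r: N * T / (r + 1)            # a simple bias-reducing alternative

for name, est in [("N*T/max(r,1)", est_mle), ("N*T/(r+1)", est_alt)]:
    b2, m = summarize(est)
    print(f"{name:>14}:  total squared bias = {b2:12.1f}   total MSE = {m:12.1f}")
```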

38-43
Abstract

The paper examines the correlations between states and events that are used in the construction of process diagrams that describe the dependability of items. Based on the constructed state and event diagram, input data is generated and the mathematical method is selected that is implemented in accordance with the problem at hand. The distinctive features and advantages of the matrix method are presented. Aim. To improve the simulation methods by clarifying the correlation between states and events and using matrix methods of calculation. Methods. The examined causal relationships between states and events allowed establishing correlations between them, i.e., an event can be the cause of a state change, in which case the state change is a consequence; a state can be the cause of an event, in which case the event is a consequence of the state. Under this approach, an event can cause a state change, while at the same time being a consequence of a state. The situation with states is similar: a state can be the cause of an event, while at the same time being the consequence of an event. It is also noted that a single state may cause a number of events, while an event can also cause a number of states. Examples of such correlations are given. It is noted that the duration of a state can be constant, random or zero. The examined correlations between states and events enable a substantiated construction of a state-transition diagram: it is derived from a conceptual model, in which all states and events are given a physical and technical interpretation, that is then transformed into a formal state-transition diagram. Special attention is given to the matrix methods that have a number of advantages, i.e., compactness and simplicity of converting the input characteristics into output characteristics, availability of standard software, use of verification procedures, and feasibility of implementation using standard computer-based tools. The input data is also generated in matrix form. The paper indicates the characteristics of a state-transition diagram that can be calculated from the input data. Note is made of the use of methods based on semi-Markov processes. The author points out that, while using matrix methods, cycles should be generated. The relevant matter of the large number of states and the consequent problem of state aggregation is touched upon. Two approaches to the aggregation of states are set forth that allow keeping the system's output characteristics unchanged. Results. A proposal is formulated for the construction of a dependability model involving a number of stages, i.e., definition of the goal of simulation indicating the dependability indicators used, description of the conceptual model, construction of a substantiated state-transition diagram, selection of the mathematical method, calculations, discussion of the findings, and conclusions and suggestions based on the performed simulation. Discussion and conclusions. A dependability model should take into consideration the causal relationships between states and events that are established based on the physical, as well as the engineering and technical, features of the item. Taking these relationships into account, a state diagram is generated that enables initial data compilation. The matrix method is efficient and has a number of useful features. The above considerations are methodological in nature. They can be helpful for generating dependability models of technical systems and for studying dependability theory in educational institutions.
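As a minimal example of the matrix method mentioned above (the state-transition diagram and its rates are invented for illustration), the mean time to failure can be obtained directly from the transition-rate matrix by inverting its transient sub-matrix.

```python
import numpy as np

# Small illustration with assumed rates (not from the paper): the state-transition
# diagram is written as a transition-rate matrix, and the mean time to reach the
# failed (absorbing) state is obtained by a matrix operation.
# States: 0 = operating, 1 = degraded, 2 = failed (absorbing).
lam01, lam12, mu10 = 0.02, 0.05, 0.4   # hypothetical rates, 1/hour

Q = np.array([
    [-lam01,           lam01,   0.0],
    [  mu10, -(mu10 + lam12), lam12],
    [   0.0,             0.0,   0.0],   # absorbing failed state
])

# Mean time to absorption from each transient state: t = (-Q_TT)^(-1) @ 1,
# where Q_TT is the sub-matrix over the transient states {0, 1}.
Q_tt = Q[:2, :2]
t = np.linalg.solve(-Q_tt, np.ones(2))
print(f"mean time to failure from state 0: {t[0]:.1f} h")
print(f"mean time to failure from state 1: {t[1]:.1f} h")
```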

FUNCTIONAL RELIABILITY. THE THEORY AND PRACTICE

44-51
Abstract

Aim. The paper examines the assessment of the functional dependability of compressor stations (CS) of underground gas storage (UGS) facilities. A definition of CS functional dependability and guidelines for its assessment are proposed. Methods. Design calculation of compressor stations, scenario analysis. Results. The paper presents: a) a definition and indicators of CS functional dependability and guidelines for its assessment; b) an example of the application of the guidelines to a UGS CS; c) a comparative analysis of UGS CS functional dependability for a number of design versions: the use of single-unit and two-unit centrifugal compressors as part of gas turbine gas pumping units for two-stage compression with intercooling. Conclusion. The paper shows the need to analyse the functional dependability of various UGS CS design versions in order to identify the most rational option that ensures unconditional performance of the key UGS CS function under uncertain initial design data.
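A toy sketch of the kind of scenario comparison described above; the configurations, required number of units and unit availability are all hypothetical and are not taken from the paper.

```python
from math import comb

# Toy scenario comparison (all figures hypothetical): the probability that a
# compressor station delivers the required flow when k of its n gas pumping
# units must be available, for two design options.

def k_out_of_n(n: int, k: int, a: float) -> float:
    """Probability that at least k of n independent units are available."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

# Option 1: four units with single-unit compressors, any three cover the flow.
# Option 2: three units with two-unit compressors, any two cover the flow.
a = 0.95   # assumed availability of one gas pumping unit
print(f"option 1 (3-out-of-4): {k_out_of_n(4, 3, a):.4f}")
print(f"option 2 (2-out-of-3): {k_out_of_n(3, 2, a):.4f}")
```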

52-55
Abstract

Aim. The paper examines the problem of small sample analysis by means of synthesizing new statistical tests generated by the clustering of the Hurst statistical test with the Frozini test, as well as with the Murota-Takeuchi test. The problem of normal distribution hypothesis testing on samples of 16 to 25 experiments is solved. Such significant limitations of the sample size arise in subject areas that include biometrics, biology, medical science and economics. In this case, the problem can be solved by applying not one, but a number of statistical tests to the analysis of the same small sample. Methods. It is suggested to multiply the outputs of the Hurst test by the outputs of the Frozini test and/or the Murota-Takeuchi test. Multiplicative clustering was performed for pairs of the examined tests and for their combination. It was shown that, for each known statistical test, an equivalent artificial neuron can be constructed. A neural network integration of about 21 classical statistical tests constructed in the last century becomes possible. It is expected that the addition of new statistical tests in the form of artificial neurons will improve the quality of multi-criteria analysis solutions. Formally, the products of distinct pairs of the 21 original classical statistical tests should produce 210 new statistical tests. That is significantly more than the total number of statistical tests developed in the last century for the purpose of normality testing. Results. The pairwise product of the examined tests allows reducing the probabilities of type I and type II errors by a factor of more than 1.55 as compared to the basic Hurst test. In the case of the triple product of the tests, the error probabilities decrease relative to the basic Hurst test and to the associated second test. It is noted that there is no steady improvement in the quality of the decisions made by multiplicative mathematical constructions. The error probabilities of the new test obtained by multiplying three of the examined tests are approximately 1.5% worse than those of the tests obtained by multiplying pairs of the original tests. Conclusions. By analogy with the examined tests, the proposed data processing methods can also be applied to other known statistical tests. In theory, it becomes possible to significantly increase the number of new statistical tests by multiplying their final values. Unfortunately, as the number of clustered statistical tests grows, the mutual correlations between the newly synthesized tests grow as well. The latter fact limits the capabilities of the method proposed in the paper. Further research is required in order to identify the most efficient combinations of pairs, triples or larger groups of known statistical tests.
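The sketch below illustrates only the multiplicative clustering idea described above; the actual Hurst, Frozini and Murota-Takeuchi statistics are not reproduced, and two simple placeholder normality statistics are multiplied instead, with the decision threshold calibrated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, alpha = 21, 50_000, 0.05   # small sample, within the paper's 16..25 range

# Sketch of the multiplicative clustering idea only: the statistics of the tests
# used in the paper are not reproduced; two placeholder normality statistics are
# multiplied, and the threshold of the combined test is calibrated by Monte Carlo
# under the normality hypothesis.

def stat_skew(x):
    z = (x - x.mean()) / x.std(ddof=1)
    return abs((z**3).mean())            # absolute sample skewness

def stat_kurt(x):
    z = (x - x.mean()) / x.std(ddof=1)
    return abs((z**4).mean() - 3.0)       # absolute excess kurtosis

def combined(x):
    return stat_skew(x) * stat_kurt(x)    # multiplicative clustering of two tests

# Calibrate the critical value under H0 (normal data).
h0 = np.array([combined(rng.normal(size=n)) for _ in range(trials)])
crit = np.quantile(h0, 1 - alpha)

# Estimate power against a skewed alternative (here: exponential data).
h1 = np.array([combined(rng.exponential(size=n)) for _ in range(trials)])
print(f"critical value: {crit:.3f}, power vs exponential: {(h1 > crit).mean():.3f}")
```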

56-62
Abstract

Aim. The development of digital technology brings about the need to digitize data with their subsequent storage on digital media. Regardless of how information is stored, it is of value and its loss may cause harm. There are a number of preventive measures (hardware and logical redundancy of various types) to prevent such loss. Should the preventive measures, for whatever reason, fail to protect the data and access to the latter be lost, it must be recovered in a complete and timely manner. In this context, a need arises for a data recovery algorithm that would take into account the hardware features of today's storage media, their logical structure, as well as the specificity of the stored data. Methods. There are two approaches to information recovery, i.e., all-purpose and personal. The all-purpose approach involves using a minimal number of programs and tools that work with all items. The personal approach implies a large number of programs and tools that address specific issues associated with the loss of access to information. That enables a faster, higher-quality recovery as compared to the all-purpose approach. Additionally, personal programs are normally cheaper than all-purpose software. All-purpose information recovery tools do not provide quality results when applied to large numbers of failure scenarios. A single utility may not be enough for resolving all issues caused by an incident. A readily available template for obtaining an acceptable result does not exist either. Aside from personal software, there are other alternatives to the all-purpose approach, i.e., manual data recovery programs and hardware and software systems. In cases of minor logical faults (e.g., master boot record corruption), manual data recovery software is used. If a drive is affected by critical hardware issues, hardware and software systems are used. Results. A method of recovering data on storage media of various types was created. It combines the all-purpose and personal approaches to information recovery, the use of software for manual data recovery, as well as hardware and software systems. The method allows recovering data with popular file extensions from common file systems and storage media. Compatibility with RAID arrays of all levels is provided. Programs were selected out of eight sets using the analytic hierarchy process, with priority given to the performance criterion. The method was subjected to a number of tests. Testing involved the emulation of incidents associated with the loss of access to data. The cost of eliminating various incidents using the developed methodology is estimated. Conclusions. Based on the obtained test results, conclusions are set forth regarding the efficiency of the personal approach to information recovery.
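A minimal sketch of the analytic hierarchy process step mentioned above: priorities are taken from the principal eigenvector of a pairwise comparison matrix and checked with the consistency ratio; the 3x3 comparison matrix below is invented for illustration.

```python
import numpy as np

# Minimal sketch of the standard analytic hierarchy process step: priorities are
# the principal eigenvector of a pairwise comparison matrix, and the consistency
# ratio checks the judgements. The matrix below (three hypothetical recovery
# programs compared on the performance criterion) is invented for illustration.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # random index (Saaty's table)
print("priorities:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```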



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1729-2646 (Print)
ISSN 2500-3909 (Online)