Journal of Aerospace Engineering and Mechanics

ISSN: 2578-6350

Editor-in-chief

Dr. Ephraim Suhir
Portland State University, USA

Review Article | VOLUME 4 | ISSUE 2 | DOI: 10.36959/422/444 OPEN ACCESS

The Outcome of an Engineering Undertaking of Importance Must be Quantified to Assure its Success and Safety: Review

E Suhir

  • E Suhir 1,2,3,4,5*
  • Physical Sciences and Engineering Research Division, Bell Laboratories, Murray Hill, NJ, USA
  • Departments of Mechanical and Material and Electronic and Computer Engineering, Portland State University, USA
  • Department of Applied Electronic Materials, Institute of Sensors and Actuators, Technical University, Austria
  • Mackay Institute of Research and Innovation, James Cook University, Australia
  • ERS Co, Los Altos, USA

Suhir E (2020) The Outcome of an Engineering Undertaking of Importance Must be Quantified to Assure its Success and Safety: Review. J Aerosp Eng Mech 4(2):218-252.

Accepted: August 21, 2020 | Published Online: August 23, 2020


Abstract


The outcome of a crucial engineering undertaking must be quantified at the design/planning stage to assure its success and safety, and since the probability of an operational failure is, in effect, never zero, such a quantification should be done on the probabilistic basis. Some recently published work on the probabilistic predictive modeling (PPM) and probabilistic design for reliability (PDfR) of aerospace electronic and photonic (E&P) products, including human-in-the-loop (HITL) problems and challenges, is addressed and briefly reviewed. The effort was lately "brought down to earth" to model possible collisions in automated driving (AD). In addition, some problems and tasks beyond the E&P and vehicular engineering field are also addressed with an objective to show how the developed methods and approaches can be effectively and fruitfully employed whenever there is a need to quantify the reliability of an engineering technology with consideration of the human performance.

Accordingly, the following nine problems have been addressed in this review with an objective to show how the outcome of a critical engineering endeavor can be predicted using the PPM and PDfR concept: 1) Accelerated testing in E&P engineering: significance, attributes and challenges; 2) Failure-oriented accelerated testing (FOAT), its objective and role; 3) PPM approach and PDfR concept, their roles and applications; 4) Kinetic multi-parametric Boltzmann-Arrhenius-Zhurkov (BAZ) equation as the "heart" of the PDfR concept; 5) Burn-in-testing (BIT) of E&P products with an attempt to shed light on the basic "to BIT or not to BIT" question; 6) Adequate trust as an important constituent of the human-capacity-factor (HCF) affecting the outcome of a mission or an extraordinary situation; 7) PPM of an emergency-stopping situation in automated driving (AD) or on a railroad (RR); 8) Quantifying the astronaut's/pilot's/driver's/machinist's state of health (SoH) and its effect on his/her performance; 9) Survivability of species in different habitats. The objective of the latter effort is to demonstrate that the developed PPM approaches and methodologies, and particularly those using the multi-parametric BAZ equation, could be effectively employed well beyond the vehicular engineering area.

The general concepts are illustrated by numerical examples. All the considered PPM problems were treated using analytical ("mathematical") modeling. The attributes of such modeling, the background of the multi-parametric BAZ equation and the ten major principles ("the ten commandments") of the PDfR concept are addressed in the appendices.

Acronyms


AD: Automated Driving; ASD: Anticipated Sight Distance; BAZ: Boltzmann-Arrhenius-Zhurkov (equation); BGA: Ball Grid Array; BIT: Burn-in Testing; BTC: Bathtub Curve; CTE: Coefficient of Thermal Expansion; DEPDF: Double-Exponential Probability Distribution Function; DfR: Design for Reliability; E&P: Electronic and Photonic; FOAT: Failure-Oriented-Accelerated-Testing; FoM: Figures of Merit; HALT: Highly-Accelerated-Life-Testing; HCF: Human Capacity Factor; HE: Human Error; HF: Human Factor; HITL: Human-in-the-Loop; IMP: Infant Mortality Portion (of the BTC); MTTF: Mean-Time-to-Failure; MWL: Mental Workload; NIH: "Not invented here"; PAM: Probabilistic Analytical Modeling; PDF: Probability Distribution Function; PDfR: Probabilistic Design for Reliability; PHM: Prognostics and Health Monitoring; PPM: Probabilistic Predictive Modeling; PRA: Probabilistic Risk Analysis; QT: Qualification Testing; RR: Railroad; RUL: Remaining Useful Life; SAE: Society of Automotive Engineers; SF: Safety Factor; SJI: Solder Joint Interconnections; SoH: State of Health; SFR: Statistical Failure Rate; TTF: Time-to-Failure

Introduction


Quo vadis?

St. Paul

Progress in vehicular safety is achieved today mostly through various, predominantly experimental and a posteriori statistical, ways to improve the hardware and software of the instrumentation and equipment, to implement better ergonomics, and to introduce and advance other more or less well-established efforts of experimental reliability engineering and traditional human psychology that directly affect the product's reliability and human performance. There exists, however, a significant potential for the reduction of accidents and casualties in aerospace, maritime, automotive and railroad engineering through a better understanding of the role that various uncertainties play in the planner's and operator's worlds of work, when never failure-free navigation equipment and instrumentation, the never hundred-percent-predictable response of the object of control (an air- or spacecraft, a car, a train, or an ocean-going vessel), an uncertain and often harsh environment, and never-perfect human performance contribute jointly to the outcome of a vehicular mission or an extraordinary situation. By employing quantifiable and measurable ways of assessing the role and significance of critical uncertainties and by treating the HITL as a part, often the most crucial part, of a complex man-instrumentation-vehicle-environment-navigation system and its critical interfaces, one could improve dramatically the state-of-the-art in assuring operational safety of a vehicle and its passengers and crew. This can be done by predicting, quantifying and, if necessary and possible, even specifying an adequate (typically low enough, but different for different vehicles, missions and circumstances) probability of success and safety of a mission or an off-normal situation [1-19].

Nothing and nobody is perfect, and the difference between a highly reliable technology, object, product, performance or mission and an insufficiently reliable one is "merely" in the levels of their never-zero probability of failure. Application of the PPM approach and the PDfR concept [20-31] provides a natural and effective means for the reduction of vehicular casualties. This approach, as has been indicated, can be applied also beyond the vehicular field, to devices whose operational reliability is critical, such as, e.g., military or long-haul communications systems, or medical devices [32]. When success and safety of a critical undertaking are imperative, the ability to predict and quantify its outcome is paramount. The application of the PDfR concept can improve dramatically the state-of-the-art in reliability and quality engineering by turning the art of creating reliable products and assuring adequate human performance into a well-substantiated and "reliable" science. Tversky and Kahneman [33] (the latter received the 2002 Nobel Memorial Prize in Economics) were, perhaps, the first to point out the importance of considering the role of uncertainties in decision making and, particularly, of analyzing the cognitive biases that affect decision making in life and work. Since, however, these investigators were, although outstanding, traditional human psychologists, no quantitative, not to mention probabilistic, assessments were suggested.

It should be pointed out that, while the traditional statistical human-factor-oriented approaches are based mostly on experimentation followed by statistical analyses, an important feature of the PDfR concept is that it is based upon, and starts with, a physically meaningful and flexible predictive model (such as the BAZ one) geared to the appropriate FOAT [34-37]. Statistics and/or experimentation can be applied afterwards, to establish the important numerical characteristics of the selected model (such as, say, the mean value and the standard deviation in a normal distribution) and/or to confirm the suitability of a particular model for the application of interest. The highly focused and highly cost-effective FOAT, the "heart" of the PDfR concept, is aimed, first of all, at understanding and/or confirming the anticipated physics of failure (see Table 1 below). The traditional, about forty-years-old, highly accelerated life testing (HALT), although it sheds important light on the reliability of the E&P product of interest (bad things would not last for forty years, would they?), does not quantify reliability and, because of that, can hardly improve our understanding of the device's and/or package's physics of failure. FOAT, geared to a physically meaningful PDfR model, can be used as an appropriate extension and modification of HALT. An important attribute of the PPM/PDfR/FOAT-based approach is that, if the predicted probability of non-failure, based on the applied PDfR methodology and the FOAT effort, is, for whatever reason, not acceptable, an appropriate sensitivity analysis (SA) using the already developed and available algorithms and calculation procedures can be effectively conducted to improve the situation without resorting to additional expensive and time-consuming testing.

Such a cost-effective and insightful approach is applicable, with the appropriate modifications and generalizations, if necessary, to numerous situations, not necessarily in the vehicular domain, in which a human-in-control encounters an uncertain environment or a hazardous situation. The suggested quantification-based HITL approach is applicable also when there is an incentive to quantify a human's qualifications and/or when there is a need to assess, and possibly improve, human performance and his or her possible role in a particular engagement.

An important additional consideration in favor of quantification of reliability has to do with the always desirable optimizations. The best engineering product is, in effect, as is known, the best compromise between the requirements for its reliability, cost effectiveness and time-to-market (time-to-completion). The latter two requirements are always quantified. No effective optimization could be achieved, of course, if reliability is not quantified as well. In HITL situations, such an optimization should be done with consideration of the role of the human factor.

In the review that follows, some important problems and tasks associated with assuring success and safety of vehicular and other engineering undertakings are addressed, with an objective to show what could and should be done differently when high reliability is imperative and should be quantified to assure its adequate level and cost effectiveness. A simple example of how to optimize reliability [38] indicates that such an optimization can be achieved by optimizing the product's availability - the probability that the product is sound, i.e. available to the user, when needed. When encountering a particular reliability problem at the design, fabrication, testing, or operation stage of the product's life, and considering the use of predictive modeling to assess the seriousness and the likely consequences of a detected failure, one has to choose whether a statistical, or a physics-of-failure-based, or a suitable combination of these two major modeling tools should be employed to address the problem of interest and to decide on how to proceed.

A three-step concept (TSC) is suggested as a possible way to go in such a situation [39,40]. The classical statistical Bayes formula can be used at the first step as a technical diagnostics tool, with an objective to identify, on the probabilistic basis, the faulty (malfunctioning) device(s) from the obtained signals ("symptoms of faults"). The multi-parametric BAZ model can be employed at the TSC's second step to assess the remaining useful life (RUL) of the faulty device(s). If the assessed RUL is still long enough, no action might be needed; if it is not, a corrective restoration action becomes necessary. In any event, after the first two steps are carried out, the device is put back into operation, provided that the assessed probability of its continuing failure-free operation is found to be satisfactory. If failure nonetheless occurs, the third step should be undertaken to update reliability. The statistical beta-distribution, in which the probability of failure itself is treated as a random variable, is suggested to be used at this step. The suggested concept is illustrated by a numerical example geared to the use of the prognostics-and-health-monitoring (PHM) effort in actual operation, such as, e.g., an en-route flight mission.
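As an illustration of the first and third TSC steps, here is a minimal sketch in Python with made-up (hypothetical) numbers, not taken from the cited work: the Bayes formula yields the posterior probability that a device is faulty given an observed symptom, and a beta-distribution update revises the failure-probability estimate after additional field experience.

```python
# A minimal sketch of TSC steps 1 and 3 with made-up (hypothetical) numbers.
# Step 1: Bayes formula as a technical-diagnostics tool.
prior_fault = 0.02          # assumed prior probability that the device is faulty
p_sym_fault = 0.90          # assumed probability of the observed symptom if faulty
p_sym_ok = 0.05             # assumed probability of the same symptom if healthy

evidence = p_sym_fault * prior_fault + p_sym_ok * (1.0 - prior_fault)
posterior_fault = p_sym_fault * prior_fault / evidence
print(f"P(faulty | symptom) = {posterior_fault:.3f}")   # ~0.269

# Step 3: beta-distribution update of the (random) probability of failure
# after n_new missions with k_new observed failures.
a, b = 1.0, 99.0            # assumed prior Beta(a, b): mean failure probability 0.01
k_new, n_new = 1, 50        # one failure observed in 50 new missions
a_post, b_post = a + k_new, b + (n_new - k_new)
print(f"updated mean probability of failure = {a_post / (a_post + b_post):.4f}")
```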

The major principles of an analytical modeling approach, the background and the attributes of the BAZ equation [41-44] and the major principles of the PDfR concept are summarized in Appendix A, Appendix B and Appendix C (the latter - in the form of "the ten commandments"), respectively.

Review


Accelerated testing in electronics and photonics: Significance, attributes and challenges

"Golden rule of an experiment: The duration of an experiment should not exceed the lifetime of the experimentalist".

Unknown experimental physicist

Accelerated testing is both a must and a powerful tool in E&P manufacturing. This is not only because getting maximum reliability information in minimum time and at minimum cost is the major goal of an E&P manufacturer, but also because it is impractical to wait for failures, when the lifetime of a typical today's E&P product manufactured using the existing "best practices" is hundreds of thousands of hours, regardless of whether this lifetime is or is not predicted with sufficient accuracy. Different types of accelerated tests in today's E&P engineering are summarized in Table 1.

A typical example of product development testing (PDT) is shear-off testing conducted when there is a need to determine the most feasible bonding material and its thickness, and/or to assess its bonding strength and/or to evaluate the shear modulus of the material. HALT is currently widely employed, in different modifications, with an intent to determine the product's reliability weaknesses, assess its reliability limits, and ruggedize the product by applying elevated stresses (not necessarily mechanical and not necessarily limited to the anticipated field stresses) that could cause field failures, and to provide supposedly large (although, actually, unknown) safety margins over expected in-use conditions. HALT often involves step-wise stressing, rapid thermal transitions, and other means that enable carrying out testing in a time- and cost-effective fashion. HALT is a "discovery" test. It is not a qualification test (QT) though, i.e. not a "pass/fail" test. It is the QT that is the major means for making a viable E&P device or package into a justifiably marketable product. While many HALT aspects are different for different manufacturers and often kept as proprietary information, QTs and standards are the same for the given industry and product type. Burn-in testing (BIT) is a post-manufacturing test. Mass fabrication, no matter how good the design effort and the fabrication technologies are, generates, in addition to desirable and relatively robust ("strong") products, also some undesirable and unreliable ("weak") devices ("freaks"), which, if shipped to the customer, will most likely fail in the field. BIT is supposed to detect and to eliminate such "freaks". As a result, the final bathtub curve (BTC) of a product that underwent BIT is not expected to contain the infant mortality portion (IMP). In today's practice, BIT (a destructive test for the "freaks" and a non-destructive one for the healthy devices) is often run within the framework of, and concurrently with, HALT.

Are today's practices based on the above accelerated testing adequate? A funny, but quite practical, definition of a sufficiently robust E&P product is that "reliability is when the customer comes back, not the product". It is well known, however, that E&P products that underwent HALT, passed the existing QTs and survived BIT often prematurely fail in the field. So, what could and should be done differently?

Failure-oriented-accelerated-testing (FOAT), its objective and role

"Say not, "I have found the truth," but rather, "I have found a truth."

Kahlil Gibran, Lebanese artist, poet and writer

One crucial shortcoming of today's E&P reliability assurance practices is that they are seldom based on a good understanding of the underlying reliability physics of the particular E&P product and, most importantly, although they make claims about its lifetime, they do not suggest a trustworthy effort to quantify it. A possible way to go is to design and conduct FOAT aimed, first of all, at understanding and confirming the anticipated physics of failure, but also at using the FOAT data to predict the operational reliability of the product (last column in Table 1). To do that, FOAT should be geared to an adequate, simple, easy-to-use and physically meaningful predictive model. The BAZ model (see Appendix B and section 3 below) can be employed in this capacity.

Predictive modeling has proven for many years to be a highly useful and highly time- and cost-effective means for understanding the physics of failure in reliability engineering, as well as for designing the most effective accelerated tests. It has been recently suggested that a highly focused (on the most vulnerable material and/or structural element of the design, such as, e.g., solder joint interconnections) and, to the extent possible, highly cost-effective FOAT be considered as the experimental basis, the "heart", of the new fruitful, flexible and physically meaningful design-for-reliability concept - PDfR (see the next section for details). FOAT should be conducted in addition to, and, in some cases, even instead of, HALT, especially when developing new technologies and new products, whose operational reliability is, as a rule, unclear and for which no experience has been accumulated yet and no best practices nor suitable HALT methodologies have yet been developed. Quantitative estimates based on the FOAT and the subsequent PPM might not be perfect, at least at the beginning, but it is still better to pursue this effort than to turn a blind eye to the never-zero probability of the product's failure and to the fact that the reliability of an E&P product cannot be assured if this probability is not assessed and made adequate for the given product. If one sets out to understand the physics of failure in order to create, in accordance with the "principle of practical confidence", a failure-free product, conducting FOAT is imperative: it is needed to confirm the usage of a particular predictive model, such as the BAZ equation, to confirm the physics of failure, and to establish the numerical characteristics (activation energy, time constant, sensitivity factors, etc.) of the selected model.

FOAT could be viewed as an extension of HALT, but while HALT is a "black box", i.e., a methodology which can be perceived in terms of its inputs and outputs without clear knowledge of the underlying physics and the likelihood of failure, FOAT is a "transparent box", whose objective is to confirm the use of a particular reliability model. While HALT does not measure (does not quantify) reliability, FOAT does. The major assumption is, of course, that this model should be valid for both accelerated and actual operation conditions. HALT that tries to "kill many unknown birds with one (also not very well known) stone" has demonstrated, however, over the years its ability to improve robustness through a "test-fail-fix" process, in which the applied stresses (stimuli) are somewhat above the specified operating limits. This "somewhat above" is based, however, on an intuition, rather than on a calculation.

There is a general, and, to a great extent, justified, perception that HALT is able to precipitate and identify failures of different origins. HALT can therefore be used for "rough tuning" of a product's reliability, and FOAT could be employed when "fine tuning" is needed, i.e., when there is a need to quantify, assure and even specify the operational reliability of a product. The FOAT-based approach could be viewed as a quantified and reliability-physics-oriented HALT. The FOAT approach should be geared to a particular technology and application, with consideration of the most likely stressors. FOAT and HALT could be carried out separately, or might be partially combined in a particular accelerated test effort. New products present natural reliability concerns, as well as significant challenges at all stages of their design, manufacture and use. An appropriate combination of HALT and FOAT efforts could be especially useful for ruggedizing and quantifying the reliability of such products. It is always necessary to correctly identify the expected failure modes and mechanisms, and to establish the appropriate stress limits of HALTs and FOATs with an objective to prevent "shifts" in the dominant failure mechanisms. There are many ways in which this could be done. E.g., the test specimens could be mechanically pre-stressed, so that the temperature cycling could be carried out in a narrower range of temperatures [45]. But a better way seems to be the replacement of temperature cycling with a more cost-effective, less time-consuming and, most importantly, more physically meaningful accelerated test, such as the low-temperature/random-vibrations bias (see section 4.3).

PPM approach and PDfR concept, their roles and applications

"A pinch of probability is worth a pound of perhaps."

James G. Thurber, American writer and cartoonist

Design for reliability (DfR) is, as is known, a set of approaches, methods and best practices that are supposed to be used at the design stage of a product to minimize the risk that the fabricated product might not meet the reliability objectives and customer expectations. When a deterministic approach is used, the safety factor (SF) is defined as the ratio $SF = C/D$ of the capacity ("strength") $C$ of the product to the demand ("stress") $D$. When the PDfR approach is considered, the SF can be introduced as the ratio $SF = \bar{\psi}/\hat{s}$ of the mean value $\bar{\psi}$ of the safety margin $SM = \Psi = C - D$ to its standard deviation $\hat{s}$. In this analysis, having in mind the application of the BAZ equation, the probability $P$ of non-failure is used as the suitable measure of the product's reliability. Here are several simple PDfR practical examples.

Reliable seal glass bond in a ceramic package design: AT&T ceramic packages fabricated at its Allentown (former "Western Electric") facility in the mid-nineties experienced numerous failures during accelerated tests. It has been determined that this happened because the seal/solder glass that bonded the two ceramic parts had a higher coefficient of thermal expansion (CTE) than the ceramic lid and substrate, and therefore, when the packages were cooled down from the high manufacturing temperature of about 800-900 ℃ to room temperature, all the packages cracked. To design a reliable seal we had not only to replace the existing seal glass with a glass material that would have a lower CTE than the ceramics, but, in addition to that, we had to make sure that the interfacial shearing stresses at the ceramics/glass interfaces subjected to compression at low temperatures would be low enough not to crack the seal glass material. Treating the CTEs of the brittle ceramic and brittle glass materials as normally distributed random variables, the following PDfR methodology was developed and applied. No failures were observed in the packages designed and manufactured based on the developed methodology. Here is how a reliable seal glass material was selected in a ceramic IC package using this PDfR approach [46].

The maximum interfacial shearing stress in a thin solder glass layer in a ceramic package design can be computed as $\tau_{\max} = k h_g \sigma_{\max}$. Here $k = \sqrt{\lambda/\kappa}$ is the parameter of the interfacial shearing stress, $\lambda = \frac{1-\nu_c}{E_c h_c} + \frac{1-\nu_g}{E_g h_g}$ is the axial compliance of the assembly, $\kappa = \frac{h_c}{3G_c} + \frac{h_g}{3G_g}$ is its interfacial compliance, $G_c = \frac{E_c}{2(1+\nu_c)}$ and $G_g = \frac{E_g}{2(1+\nu_g)}$ are the shear moduli of the ceramics and glass materials, $\sigma_{\max} = \frac{\Delta\alpha\,\Delta t}{\lambda h_g}$ is the maximum normal stress in the mid-portion of the glass layer, $\Delta t$ is the change in temperature from the soldering temperature to the low (room or testing) temperature, $\Delta\alpha = \bar{\alpha}_c - \bar{\alpha}_g$ is the difference in the effective CTEs of the ceramics and the glass, $\bar{\alpha}_{c,g} = \frac{1}{\Delta t}\int_t^{t_0}\alpha_{c,g}(t)\,dt$ are these coefficients for the given temperature $t$, $t_0$ is the annealing (zero stress, setup) temperature, and $\alpha_{c,g}(t)$ are the temperature-dependent CTEs of the materials in question. In an approximate analysis one could assume that the axial compliance $\lambda$ of the assembly is due to the glass only, so that $\lambda \approx \frac{1-\nu_g}{E_g h_g}$, and therefore the maximum normal stress in the solder glass can be evaluated as $\sigma_{\max} = \frac{E_g}{1-\nu_g}\Delta\alpha\,\Delta t$. While the geometric characteristics of the assembly, the change in temperature and the elastic constants of the materials can be determined with high accuracy, this is not the case for the difference in the CTEs of the brittle materials of the glass and the ceramics. In addition, because of the obvious incentive to minimize this difference, such a mismatch is characterized by a small difference of close and appreciable numbers. This contributes to the uncertainty of the problem and makes PPM necessary. Treating the CTEs of the two materials as normally distributed random variables, we evaluate the probability $P$ that the thermal interfacial shearing stress is compressive (negative) and, in addition, does not exceed a certain allowable level [9]. This stress is proportional to the normal stress in the glass layer, which is, in its turn, proportional to the difference $\Psi = \alpha_c - \alpha_g$ of the CTEs of the ceramics and the glass materials. One wants to make sure that the requirement

$$0 \le \Psi \le \Psi^* = \frac{\sigma_a\left(1-\nu_g\right)}{E_g\,\Delta t} \qquad\qquad (1)$$

takes place with a high probability. For normally distributed random variables $\alpha_c$ and $\alpha_g$ the variable $\Psi = \alpha_c - \alpha_g$ is also normally distributed, with the mean value $\bar{\psi} = \bar{\alpha}_c - \bar{\alpha}_g$ and the variance $D_\psi = D_c + D_g$, where $\bar{\alpha}_c$ and $\bar{\alpha}_g$ are the mean values of the materials' CTEs, and $D_c$ and $D_g$ are their variances. The probability that the above condition for the $\Psi$ value takes place is

$$P = \int_0^{\psi^*} f_\psi(\psi)\,d\psi = \Phi_1\left(\gamma^* - \gamma\right) - \left[1 - \Phi_1(\gamma)\right] \qquad\qquad (2)$$

where

$$\Phi_1(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t} e^{-t^2/2}\,dt \qquad\qquad (3)$$

is the error (Laplace) function, $\gamma = \bar{\psi}/\sqrt{D_\psi}$ is the SF for the CTE difference, and $\gamma^* = \psi^*/\sqrt{D_\psi}$ is the SF for the acceptable level of the allowable stress.

If, e.g., the elastic constants of the solder glass are $E_g = 0.66\times10^6\ \mathrm{kg/cm^2}$ and $\nu_g = 0.27$, the sealing (fabrication) temperature is 485 ℃, the lowest (testing) temperature is -65 ℃ (so that $\Delta t = 550$ ℃), the computed effective CTEs at this temperature are $\bar{\alpha}_g = 6.75\times10^{-6}\ 1/℃$ and $\bar{\alpha}_c = 7.20\times10^{-6}\ 1/℃$, the standard deviations of these CTEs are $\sqrt{D_c} = \sqrt{D_g} = 0.25\times10^{-6}\ 1/℃$, and the (experimentally obtained) ultimate compressive strength of the glass material is $\sigma_u = 5500\ \mathrm{kg/cm^2}$, then, with an acceptable SF of, say, 4, the allowable stress is $\sigma_a = \sigma_u/4 = 1375\ \mathrm{kg/cm^2}$. The allowable level of the CTE parameter $\psi = \alpha_c - \alpha_g$ is therefore

$$\psi^* = \frac{\sigma_a\left(1-\nu_g\right)}{E_g\,\Delta t} = \frac{1375\times0.73}{0.66\times10^6\times550} = 2.765\times10^{-6}\ 1/℃,$$

and its calculated mean value $\bar{\psi}$ and variance $D_\psi$ are $\bar{\psi} = \bar{\alpha}_c - \bar{\alpha}_g = 0.450\times10^{-6}\ 1/℃$ and $D_\psi = D_c + D_g = 0.125\times10^{-12}\ (1/℃)^2$. Then the predicted SFs are $\gamma = 1.2726$ and $\gamma^* = 7.8201$, and the corresponding probability of non-failure of the seal glass material is

$$P = \Phi_1\left(\gamma^* - \gamma\right) - \left[1 - \Phi_1(\gamma)\right] = 0.898$$

Note that if the standard deviations of the materials' CTEs were only $\sqrt{D_c} = \sqrt{D_g} = 0.1\times10^{-6}\ 1/℃$, then the SFs $\gamma$ and $\gamma^*$ and the probability $P$ of non-failure would be significantly higher: $\gamma = 3.1825$, $\gamma^* = 19.5556$ and $P = 0.999$.
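The seal-glass calculation above can be reproduced with a few lines of code. The sketch below is only a numerical check of the quoted example (not the author's code), using the input values from the text and the standard normal distribution function for $\Phi_1$; the function name norm_cdf is of course arbitrary.

```python
# A minimal numerical check of the seal-glass example (input values from the text).
import math

def norm_cdf(x):                      # the Laplace function Phi_1
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

E_g, nu_g, dt = 0.66e6, 0.27, 550.0   # kg/cm^2, -, degC
sigma_a = 5500.0 / 4.0                # allowable stress with SF = 4, kg/cm^2
alpha_c, alpha_g = 7.20e-6, 6.75e-6   # effective CTEs, 1/degC
sd_c = sd_g = 0.25e-6                 # standard deviations of the CTEs, 1/degC

psi_star = sigma_a * (1.0 - nu_g) / (E_g * dt)   # allowable CTE mismatch
psi_mean = alpha_c - alpha_g
sd_psi = math.sqrt(sd_c**2 + sd_g**2)

gamma = psi_mean / sd_psi             # SF for the CTE difference
gamma_star = psi_star / sd_psi        # SF for the allowable stress level
P = norm_cdf(gamma_star - gamma) - (1.0 - norm_cdf(gamma))
print(round(gamma, 4), round(gamma_star, 4), round(P, 3))   # ~1.27, ~7.82, ~0.898
```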

Application of extreme value distribution: An E&P device is operated in temperature cycling conditions. Let us assume that the random amplitude of the induced thermal stress in a single cycle is distributed in accordance with the Rayleigh law, so that its probability density function is

$$f(r) = \frac{r}{D_x}\exp\left(-\frac{r^2}{2D_x}\right) \qquad\qquad (4)$$

Here Dx is the variance of the distribution. Let us assess the most likely extreme value of the stress amplitude for a large number n of cycles.

The probability distribution density function g(yn) and the probability distribution function G(yn) for the extreme value Yn of the stress amplitude are expressed as follows [28,47]:

$$g(y_n) = n\,f(x)\left[F(x)\right]^{n-1}\Big|_{x=y_n} \qquad\qquad (5)$$

and

$$G(y_n) = \left[F(x)\right]^{n}\Big|_{x=y_n} \qquad\qquad (6)$$

respectively. Introducing the expression for the function G(yn) into the expression for the function g(yn), the following formula can be obtained for the probability density distribution function g(yn):

$$g(y_n) = n\,\varsigma_n\exp\left(-\frac{\varsigma_n^2}{2}\right)\left[1-\exp\left(-\frac{\varsigma_n^2}{2}\right)\right]^{n-1} \qquad\qquad (7)$$

where $\varsigma_n = y_n/\sqrt{D_x}$ is the ratio of the sought amplitude, after the loading is applied $n$ times, to the standard deviation of the random response in question. The condition $g'(y_n) = 0$ results in the equation:

$$\left[n\exp\left(-\frac{\varsigma_n^2}{2}\right) - 1\right] + \frac{1}{\varsigma_n^2}\left[1-\exp\left(-\frac{\varsigma_n^2}{2}\right)\right] = 0 \qquad\qquad (8)$$

If the number $n$ is large, the second term in this expression is small compared to the first one and can be omitted. Then we obtain $n\exp\left(-\varsigma_n^2/2\right) - 1 = 0$. Hence,

$$y_n = \varsigma_n\sqrt{D_x} = \sqrt{2 D_x \ln n} \qquad\qquad (9)$$

As evident from this result, the ratio of the extreme response $y_n$, after $n$ cycles are applied, to the most likely response $\sqrt{D_x}$, when a single cycle is applied, is $\sqrt{2\ln n}$. This ratio is 3.2552 for 200 cycles, 3.7169 for 1000 cycles, and 4.1273 for 5000 cycles.
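A one-line check of these ratios, assuming nothing beyond the $\sqrt{2\ln n}$ result derived above:

```python
# A quick check of the extreme-value ratio sqrt(2 ln n) quoted above.
import math
for n in (200, 1000, 5000):
    print(n, round(math.sqrt(2.0 * math.log(n)), 4))
# 200 -> 3.2552, 1000 -> 3.7169, 5000 -> 4.1273
```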

Adequate heat sink: Consider a heat-sink whose steady-state operation is determined by the Arrhenius equation (B-2) [28] (Appendix B). The probability of non-failure can be found using the exponential law of reliability as

$$P = \exp\left[-\frac{t}{\tau_0}\exp\left(-\frac{U}{kT}\right)\right]. \qquad\qquad (10)$$

Solving this equation for the absolute temperature T, we have:

$$T = \frac{U/k}{\ln\dfrac{t}{\tau_0\left|\ln P\right|}} \qquad\qquad (11)$$

Addressing, e.g., a failure caused by surface charge accumulation, for which the ratio of the activation energy to the Boltzmann constant is $U/k = 11{,}600\ \mathrm{K}$, and assuming that the FOAT-predicted time factor is $\tau_0 = 2\times10^{-5}$ h and that the customer requires that the probability of failure at the end of the service time of $t = 40{,}000$ h is, say, $Q = 10^{-5}$, the obtained formula for the required temperature yields: $T = 352.3\ \mathrm{K} = 79.3$ ℃. Thus, the heat sink should be designed accordingly, and the product manufacturer should require that the vendor manufactures and delivers such a heat sink. The situation changes for the worse if the temperature of the device changes, especially in a random fashion (see the previous example), but this situation can also be predicted by a simple probabilistic analysis, which is, however, beyond the scope of this article.
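The required heat-sink temperature can be verified with the short sketch below; it simply evaluates formula (11) with the input values quoted in the text and carries no assumptions beyond them.

```python
# A minimal sketch re-deriving the required heat-sink temperature from eq. (11).
import math

U_over_k = 11600.0        # activation energy over the Boltzmann constant, K
tau0 = 2.0e-5             # FOAT-predicted time factor, h
t = 40000.0               # required service time, h
Q = 1.0e-5                # allowed probability of failure; P = 1 - Q

P = 1.0 - Q
T = U_over_k / math.log(t / (tau0 * (-math.log(P))))
print(round(T, 1), "K =", round(T - 273.0, 1), "degC")   # ~352.3 K, ~79.3 degC
```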

Kinetic Multi-Parametric BAZ Equation as the "Heart" of the PDfR Concept


"Everyone knows that we live in the era of engineering, however, he rarely realizes that literally all our engineering is based on mathematics and physics"

-Bartel Leendert van der Waerden, Dutch mathematician

Electronic package subjected to the combined action of two stressors

The rationale behind the BAZ equation is described in Appendix B. Let us consider, for the sake of simplicity, the action of just two stressors [49,50]: elevated humidity H and elevated voltage V. If the level I* of the leakage current is accepted as the suitable criterion of material/structural failure, then the equation (B-2) can be written as

$$P = \exp\left[-\gamma_I I^* t\,\exp\left(-\frac{U_0 - \gamma_H H - \gamma_V V}{kT}\right)\right] \qquad\qquad (12)$$

This equation contains four unknowns: The stress-free activation energy U0, the leakage current sensitivity factor γI, the relative humidity sensitivity factor γH and the elevated voltage sensitivity factor γV. These unknowns can be determined experimentally, by conducting a three-step FOAT.

At the first step one should conduct the test for two temperatures, T1 and T2, while keeping the levels of the relative humidity H and the elevated voltage V unchanged. Assuming a certain level I* of the monitored/measured leakage current as the physically meaningful criterion of failure, recording during the FOAT the percentages P1 and P2 of non-failed samples, and using the above equation for the probability of non-failure, we obtain two equations for the probabilities of non-failure:

$$P_{1,2} = \exp\left[-\gamma_I I^* t_{1,2}\exp\left(-\frac{U_0 - \gamma_H H - \gamma_V V}{kT_{1,2}}\right)\right] \qquad\qquad (13)$$

where $t_1$ and $t_2$ are the testing times and $T_1$ and $T_2$ are the temperatures at which the failures were observed. Since the numerators in the exponents of these equations are the same, the following transcendental equation must be fulfilled:

$$f(\gamma_I) = \ln\left(\frac{-\ln P_1}{I^* t_1 \gamma_I}\right) - \frac{T_2}{T_1}\ln\left(\frac{-\ln P_2}{I^* t_2 \gamma_I}\right) = 0 \qquad\qquad (14)$$

This equation enables determining the leakage current sensitivity factor $\gamma_I$. At the second step, testing at two humidity levels, $H_1$ and $H_2$, should be conducted at the same temperature and voltage. This enables determining the relative humidity sensitivity factor $\gamma_H$. Similarly, the voltage sensitivity factor $\gamma_V$ can be determined when testing is conducted at the third step at two voltage levels, $V_1$ and $V_2$. The stress-free activation energy $U_0$ can then be evaluated from the above expression for the probability $P$ of non-failure, for any consistent combination of the relative humidity, voltage, temperature and time, as

$$U_0 = \gamma_H H + \gamma_V V - kT\ln\left(\frac{-\ln P}{I^* t\,\gamma_I}\right) \qquad\qquad (15)$$

If, e.g., after t1 = 35 h of accelerated testing at the temperature of T1 = 60 ℃ = 333 K, voltage V = 600 V and the relative humidity of H = 0.85, 10% of specimens reached the critical level I* = 3.5 μA of the leakage current and, hence, failed, then the corresponding probability of non-failure is P1 = 0.9; and if after t2 = 70 h of testing at the temperature T2 = 85 ℃ = 358 K at the same relative humidity and voltage levels, 20% of the tested samples failed, so that the probability of non-failure is P2 = 0.8, then the factor γI can be found from the equation

$$f(\gamma_I) = \ln\left(\frac{0.10536}{\gamma_I}\right) - 1.075075\,\ln\left(\frac{0.22314}{\gamma_I}\right) = 0$$

Its solution is $\gamma_I = 4926\ \mathrm{h^{-1}(\mu A)^{-1}}$, so that $\gamma_I I^* = 17{,}241\ \mathrm{h^{-1}}$. Tests at the second step are conducted at two relative humidity levels, $H_1$ and $H_2$, while keeping the temperature and the voltage unchanged. Then the factor $\gamma_H$ can be found as:

$$\gamma_H = \frac{kT}{H_1 - H_2}\left[\ln\left(\frac{0.5800\times10^{-4}\left(-\ln P_1\right)}{t_1}\right) - \ln\left(\frac{0.5800\times10^{-4}\left(-\ln P_2\right)}{t_2}\right)\right]$$

If, e.g., 5% of the tested specimens failed after t1 = 40 h of testing at the relative humidity of H1 = 0.5, at the voltage V = 600 V and at the temperature T = 60 ℃ = 333 K ( P1 = 0.95), and 10% of the specimens failed ( P2 = 0.9), after t2 = 55 h of testing at this temperature, but at the relative humidity of H2 = 0.85, then the above expression yields: γH = 0.03292 eV. At the third step, when testing at two voltage levels V1 = 600 V and V2 = 1000 V is carried out for the same temperature-humidity bias at T = 85 ℃ = 358 K and H = 0.85, and 10% of the specimens failed after t1 = 40 h ( P1 = 0.9), and 20% of the specimens failed after t2 = 80 h of testing ( P2 = 0.8), then the factor γV for the applied voltage and the predicted stress-free activation energy U0 are as follows:

$$\gamma_V = \frac{0.02870}{400}\left[\ln\left(\frac{0.5800\times10^{-4}\left(-\ln P_2\right)}{t_2}\right) - \ln\left(\frac{0.5800\times10^{-4}\left(-\ln P_1\right)}{t_1}\right)\right] = 4.1107\times10^{-6}\ \mathrm{eV/V}$$

and

$$U_0 = \gamma_H H_1 + \gamma_V V_1 - kT_1\ln\left(\frac{-\ln P_1}{I^* t_1 \gamma_I}\right) = 0.03292\times0.5 + 4.1107\times10^{-6}\times600 - 8.61733\times10^{-5}\times358\,\ln\left(\frac{-\ln 0.9}{3.5\times35\times4893.2}\right) = 0.01646 + 0.00247 + 0.47984 = 0.4988\ \mathrm{eV}$$

No wonder that the third term in this equation plays the dominant role. It is noteworthy, however, that external loading may also have an effect on the "stress-free" activation energy. The author intends to investigate such a possibility as a future work.

The activation energy $U_0$ in the above numerical example (with rather tentative, but still realistic, input data) is about $U_0 = 0.5$ eV. This result is consistent with the existing reference information. This information (Bell Labs data) indicates that for failure mechanisms typical of semiconductor devices the stress-free activation energy ranges from 0.3 eV to 0.6 eV; for metallization defects and electro-migration in Al it is about 0.5 eV; for charge loss it is on the order of 0.6 eV; and for Si junction defects it is about 0.8 eV. Other known activation energy values used in E&P reliability engineering assessments are more or less of the same order of magnitude (see also http://nomtbf.com/2012/08/where-does-0-7ev-come-from). With the above information, the following expression for the probability of non-failure can be obtained:

$$P = \exp\left[-17{,}241\,t\,\exp\left(-\frac{0.4988 - 0.03292H - 4.1107\times10^{-6}V}{8.61733\times10^{-5}\,T}\right)\right]$$

If, e.g., t = 10 h, H = 0.20, V = 220 V, and the operation temperature is T = 70 ℃ = 343 K, then the probability of non-failure at these conditions is

$$P = \exp\left[-172{,}410\,\exp\left(-\frac{0.4988 - 0.0066 - 0.0009}{0.02956}\right)\right] = 0.9897.$$
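The sketch below re-evaluates this probability from the FOAT-derived parameters quoted above; it is only a numerical check of the worked example (the parameter values are those from the text, not independently derived), not a general-purpose tool.

```python
# A minimal sketch evaluating the two-stressor BAZ probability of non-failure,
# using the FOAT-derived parameters quoted in the text.
import math

k = 8.61733e-5                            # Boltzmann constant, eV/K
U0, gH, gV = 0.4988, 0.03292, 4.1107e-6   # eV, eV (per unit RH), eV/V
gI_Istar = 17241.0                        # gamma_I * I*, 1/h (from the example above)

def P_nonfailure(t_h, H, V, T_K):
    U = U0 - gH * H - gV * V              # effective activation energy, eV
    return math.exp(-gI_Istar * t_h * math.exp(-U / (k * T_K)))

print(round(P_nonfailure(10.0, 0.20, 220.0, 343.0), 4))   # ~0.99, close to the 0.9897 quoted above
```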

Clearly, the TTF is not an independent characteristic of the lifetime of a product, but depends on the predicted or specified probability of non-failure: if this probability is required to be high, the corresponding lifetime is short, and vice versa, if the required probability of non-failure is low, the corresponding lifetime is long.

Predicted lifetime of SJIs: Application of Hall's concept

Using the BAZ model (see Appendix B), the probability of non-failure of the SJI experiencing inelastic strains during temperature cycling [48-53] can be sought as

$$P = \exp\left[-\gamma R t\,\exp\left(-\frac{U_0 - nW}{kT}\right)\right]. \qquad\qquad (16)$$

Here $U_0$ is the activation energy, which characterizes the propensity of the solder material to fracture; $W$ is the damage caused in the solder material by a single temperature cycle, measured, in accordance with Hall's concept [50-53], by the hysteresis loop area of a single temperature cycle for the strain of interest; $T$ is the absolute temperature (say, the mean temperature of the cycle); $n$ is the number of cycles; $k$ is Boltzmann's constant; $t$ is time; $R$ (Ω) is the measured (monitored) electrical resistance at the joint location; and $\gamma$ is the sensitivity factor for the measured electrical resistance.

The above equation for the probability of non-failure makes physical sense. Indeed, this probability is "one" at the initial moment of time, when the electrical resistance of the solder joint structure is next-to-zero. This probability decreases with time because of material aging and structural degradation, and not necessarily only because of temperature cycling leading to crack initiation and propagation. The probability of non-failure is lower for higher electrical resistance (a resistance as high as, say, 450 Ω can be viewed as an indication of an irreversible mechanical failure of the joint). Materials with higher activation energy $U_0$ are characterized by higher fracture toughness and have a higher probability of non-failure. The increase in the number $n$ of cycles leads to a lower effective activation energy $U = U_0 - nW$, and so does an increase in the energy $W$ of a single cycle (Figure 1).

It could be shown (see Appendix B) that the maximum entropy of the above probability distribution takes place at the MTTF expressed as:

$$\tau = \frac{1}{\gamma R}\exp\left(\frac{U_0 - nW}{kT}\right). \qquad\qquad (17)$$

Mechanical failure because of temperature cycling takes place when the number $n$ of cycles reaches $n_f = U_0/W$. When failure occurs, the temperature in the denominator of the exponent in the equation for the MTTF $\tau$ becomes irrelevant. In this case the measured probability of non-failure for the situation when failure takes place is

$$P_f = \exp\left(-\frac{t_f}{\tau_f}\right). \qquad\qquad (18)$$

Here $\tau_f = \frac{1}{\gamma R_f}$ is the corresponding MTTF. If, e.g., 20 specimens were temperature cycled and the high resistance $R_f = 450$ Ω, considered as an indication of the material's failure, was detected in 15 of them (i.e., in 75% of the specimens), then the probability of non-failure is $P_f = 0.25$. If the number of cycles during such a FOAT was, e.g., $n_f = 2000$, and each cycle lasted, say, 20 min = 1200 sec, then the predicted time-to-failure is $t_f = 2000\times1200 = 24\times10^5$ sec, and the sensitivity factor $\gamma$ for the electrical resistance is

$$\gamma = \frac{-\ln P_f}{R_f t_f} = \frac{-\ln 0.25}{450\times24\times10^5} = 1.2836\times10^{-9}\ \Omega^{-1}\,\mathrm{sec}^{-1};$$

and the predicted MTTF is

$$\tau_f = \frac{1}{1.2836\times10^{-9}\times450}\ \mathrm{sec} = 480.9\ \mathrm{hrs} = 20.0\ \mathrm{days}.$$

According to Hall's concept [51-54], the energy of a single cycle should be evaluated by running a special test, in which appropriate strain gages are used. Let us assume, e.g., that in these tests the area of the hysteresis loop of a single cycle was $W = 2.5\times10^{-4}$ eV. Then the stress-free activation energy of the solder material is $U_0 = n_f W = 2000\times2.5\times10^{-4} = 0.5$ eV. In order to assess the number of cycles to failure in actual operation conditions, one could assume that the temperature range in these conditions is, say, half the accelerated test range, and that the area $W$ of the hysteresis loop is proportional to the temperature range. Then the number of cycles to failure is

$$n_f = \frac{U_0}{W} = \frac{0.5}{2.5\times10^{-4}} = 2000.$$

If the duration of one cycle is one day, then the predicted TTF is tf = 2000 days = 5.48 years.
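The arithmetic of this solder-joint example can be checked with the short sketch below, which uses only the FOAT data quoted in the text (the variable names are, of course, arbitrary).

```python
# A minimal sketch of the solder-joint FOAT arithmetic above (values from the text).
import math

P_f, R_f = 0.25, 450.0            # probability of non-failure; resistance at failure, Ohm
n_f, cycle_s = 2000, 20 * 60      # FOAT cycles to failure and cycle duration, s
t_f = n_f * cycle_s               # time to failure in the FOAT, s

gamma = -math.log(P_f) / (R_f * t_f)          # resistance sensitivity factor, 1/(Ohm*s)
tau_f = 1.0 / (gamma * R_f)                   # MTTF at failure, s
print(f"gamma = {gamma:.4e} 1/(Ohm*s), MTTF = {tau_f/3600:.1f} h = {tau_f/86400:.1f} days")

W = 2.5e-4                                    # hysteresis-loop area per cycle, eV
U0 = n_f * W                                  # stress-free activation energy, eV
print(f"U0 = {U0} eV, cycles to failure = {U0 / W:.0f}")
```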

Accelerated testing based on temperature cycling should be replaced

It is well known that it is the combination of low temperatures and repetitive dynamic loading that dramatically accelerates the propagation of fatigue cracks, whether elastic or inelastic. A modification of the BAZ model is suggested [48,49] for the evaluation of the lifetime of SJIs experiencing inelastic strains. The experimental basis of the approach is FOAT. The test specimens were subjected to the combined action of low temperatures (not elevated temperatures, as in the classical Arrhenius model) and random vibrations with the given input energy spectrum of the "white noise" type. The methodology suggested and employed in [48,49] is viewed as a possible, effective and attractive alternative to temperature cycling, which is, as is well known, a costly, time- and labor-consuming, and often even misleading accelerated testing approach. This is because the temperature range in accelerated temperature cycling has to be substantially wider than what the material will most likely encounter in actual use conditions, and the properties of E&P materials are, as is known, temperature sensitive.

As long as inelastic deformations take place, it is assumed that it is these deformations (which typically occur at the peripheral portions of the soldered assembly, where the interfacial stresses are the highest) that determine the fatigue lifetime of the solder material, and therefore the state of stress in the elastic mid-portion of the assembly does not have to be accounted for. The roles of the size and stiffness of this mid-portion have to be considered, however, when determining the very existence and establishing the size of the inelastic zones at the peripheral portions of the soldered assemblies. Although the detailed numerical example has been carried out for a ball-grid-array (BGA) design, it is applicable also to the highly popular today column-grid-array (CGA) and quad-flat no-lead (QFN) designs, as well as to, actually, any packaging design. It is noteworthy in this connection that it is much easier to avoid inelastic strains in CGA and QFN structures than in the actually tested BGA design.

Random vibrations were considered in the developed methodology as a white noise of the given ratio of the acceleration amplitudes squared to the vibration frequency. Testing was carried out for two PCBs, with surface-mounted packages on them, at the same level (with the mean value of 50 g) of three-dimensional random vibrations. One board was subjected to the low temperature of -20 ℃ and the other one to -100 ℃. It has been found, by preliminary calculations, that the solder joints at -20 ℃ will still perform within the elastic range, while the solder joints at -100 ℃ will experience inelastic strains. No failures were detected in the joints of the board tested at -20 ℃, while the joints of the board tested at -100 ℃ failed after several hours of testing.

Predicted "static fatigue" lifetime of an optical silica fiber

The BAZ equation can be effectively employed as an attractive replacement for the widely used today, purely empirical, power-law relationship for assessing the "static fatigue" (delayed fracture) lifetime of optical silica fibers [41]. The literature dedicated to the delayed fracture of ceramic and silica materials, mostly experimental, is enormous. In the analysis below, the combined action of tensile loading and an elevated temperature is considered.

Let, e.g., the following input information be obtained at the first FOAT step for a polyimide-coated fiber intended for elevated temperature operations: 1) After $t_1 = 10$ h of testing at the temperature of $T_1 = 300$ ℃ = 573 K, under the stress of $\sigma = 420\ \mathrm{kg/mm^2}$, 10% of the tested specimens failed, so that the probability of non-failure is $P_1 = 0.9$; 2) After $t_2 = 8.0$ h of testing at the temperature of $T_2 = 350$ ℃ = 623 K under the same stress, 25% of the tested samples failed, so that the probability of non-failure is $P_2 = 0.75$. Forming the equation for the probability of non-failure in accordance with the BAZ equation and introducing the notations $n_{1,2} = -\frac{\ln P_{1,2}}{t_{1,2}}$ and $\theta = \frac{T_2}{T_1}$, the formula

$$\gamma_t = \left(\frac{n_2^{\theta}}{n_1}\right)^{\frac{1}{\theta-1}} \qquad\qquad (19)$$

can be obtained for the time sensitivity factor $\gamma_t$. With the above input data we obtain:

$$n_1 = -\frac{\ln P_1}{t_1} = -\frac{\ln 0.9}{10.0} = 0.010536\ \mathrm{h^{-1}},\qquad n_2 = -\frac{\ln P_2}{t_2} = -\frac{\ln 0.75}{8.0} = 0.035960\ \mathrm{h^{-1}}.$$

With the temperature ratio $\theta = \frac{T_2}{T_1} = \frac{623}{573} = 1.08726$, the factor $\gamma_t$ is

$$\gamma_t = \left(\frac{n_2^{\theta}}{n_1}\right)^{\frac{1}{\theta-1}} = \left(\frac{0.035960^{1.08726}}{0.010536}\right)^{11.4600} = 46307.3146\ \mathrm{h^{-1}}$$

At the second step, testing has been conducted at the stresses of $\sigma_1 = 420\ \mathrm{kg/mm^2}$ and $\sigma_2 = 320\ \mathrm{kg/mm^2}$ at $T = 350$ ℃ = 623 K, and it has been confirmed that 10% of the tested samples under the stress of $\sigma_1 = 420\ \mathrm{kg/mm^2}$ failed after $t_1 = 10.0$ h of testing, so that $P_1 = 0.9$. The percentage of failed samples tested at the stress level of $\sigma_2 = 320\ \mathrm{kg/mm^2}$ was 5% after $t_2 = 24$ h of testing, so that $P_2 = 0.95$. Then the ratio $\frac{\gamma_\sigma}{kT}$ of the stress sensitivity factor $\gamma_\sigma$ to the thermal energy $kT$ is

$$\frac{\gamma_\sigma}{kT} = \frac{\ln\left(n_2/n_1\right)}{\sigma_1 - \sigma_2} = \frac{\ln\left(0.035960/0.010536\right)}{100} = 0.0122761\ \mathrm{mm^2/kg}.$$

After the sensitivity factors $\gamma_t$ and $\gamma_\sigma$ for the time and for the stress are determined, the expression for the ratio of the stress-free activation energy to the thermal energy can be found from the BAZ formula for the probability of non-failure as

$$\frac{U_0}{kT} = \frac{\gamma_\sigma}{kT}\sigma - \ln\left(\frac{-\ln P}{\gamma_t\,t}\right) = 0.0122761\,\sigma - \ln\left(\frac{2.1595\times10^{-5}\left(-\ln P\right)}{t}\right).$$

If, e.g., the stress $\sigma = \sigma_2 = 320\ \mathrm{kg/mm^2}$ is applied for $t = 24$ h and the acceptable probability of non-failure is, say, $P = 0.99$, then

$$\frac{U_0}{kT} = 0.0122761\times320 - \ln\left(\frac{2.1595\times10^{-5}\left(-\ln 0.99\right)}{24}\right) = 3.928 + 18.521 = 22.449$$

This result indicates that the activation energy U0 is determined primarily, as has been expected, by the property of the silica material (second term), but is affected also, to a lesser extent, by the level of the applied stress. The fatigue lifetime, i.e. TTF, can be determined for the acceptable (specified) probability of non-failure as

$$t = -\frac{\ln P}{\gamma_t}\exp\left(\frac{U_0 - \gamma_\sigma\sigma}{kT}\right) \qquad\qquad (20)$$

This formula indicates that when the specified probability of non-failure is low, the expected lifetime (RUL) could be significant. If, e.g., the applied temperature and stress are $T = 325$ ℃ = 598 K and $\sigma = 5.0\ \mathrm{kg/mm^2}$, and the acceptable (specified) probability of non-failure is $P = 0.8$, then the predicted TTF is

$$t = -\frac{\ln P}{\gamma_t}\exp\left(\frac{U_0}{kT} - \frac{\gamma_\sigma}{kT}\sigma\right) = -\frac{\ln 0.8}{46307.3146}\exp\left(22.4496 - 0.012276\times5.0\right) = 25469.4221\ \mathrm{h} = 2.907\ \mathrm{years}$$

If, however, the acceptable probability of non-failure is considerably higher, say, P = 0.99, then the fiber's lifetime is much shorter, only

$$t = -\frac{\ln 0.99}{46307.3146}\exp\left(22.4496 - 0.0122761\times5.0\right) = 1147.1494\ \mathrm{h} = 47.8\ \mathrm{days}.$$

When P=0.999, the lifetime is

$$t = -\frac{\ln 0.999}{46307.3146}\exp\left(22.4496 - 0.0122761\times5.0\right) = 114.2\ \mathrm{h} = 4.8\ \mathrm{days}.$$
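The static-fatigue lifetimes quoted above all follow from formula (20); the sketch below evaluates it for the three specified probabilities of non-failure, using the FOAT-derived parameters from the text (small differences with respect to the quoted figures are due to rounding).

```python
# A minimal sketch of the static-fatigue lifetime estimate, using the
# FOAT-derived parameters quoted above (gamma_t, gamma_sigma/kT, U0/kT).
import math

gamma_t = 46307.3146        # time sensitivity factor, 1/h
gs_over_kT = 0.0122761      # gamma_sigma / (kT), mm^2/kg
U0_over_kT = 22.4496        # stress-free activation energy over kT

def lifetime_h(P, sigma):
    """TTF per eq. (20) for a specified probability of non-failure P and stress sigma."""
    return (-math.log(P) / gamma_t) * math.exp(U0_over_kT - gs_over_kT * sigma)

for P in (0.8, 0.99, 0.999):
    print(P, round(lifetime_h(P, 5.0), 1), "h")   # ~25469 h, ~1147 h and ~114 h
```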

BIT of E&P Products: To BIT or Not to BIT, That's the Question


"We see that the theory of probability is at heart only common sense reduced to calculations: it makes us appreciate with exactitude what reasonable minds feel by a sort of instincts, often without being able to account for it."

Pierre-Simon, Marquis de Laplace, French mathematician and astronomer

BIT [54-58] is an accepted practice in E&P manufacturing for detecting and eliminating early failures ("freaks") in newly fabricated electronic products prior to shipping the "healthy" ones that survived BIT to the customer(s). BIT can be based on temperature cycling, elevated temperatures, voltage, current, humidity, random vibrations, etc., and/or, since the principle of superposition does not work in reliability engineering, on an appropriate combination of these stressors. BIT is a costly undertaking: early failures are avoided and the infant mortality portion (IMP) of the bathtub curve (BTC) is supposedly eliminated at the expense of a reduced yield. What is even worse is that the elevated BIT stressors might not only eliminate "freaks," but could cause permanent damage to the main population of the "healthy" products. This kind of testing should therefore be well understood, thoroughly planned and carefully executed. It is unclear, however, whether BIT is always needed ("to BIT or not to BIT: that's the question"), or to what extent the current practices are adequate and effective.

HALT is currently employed as a BIT vehicle and, as has been indicated above, is a "black box" that tries "to kill many birds with one stone". It is therefore unable to provide any trustworthy information on what this testing actually does. It remains even unclear what is actually happening during, and as a result of, the HALT-based BIT, and how to effectively eliminate "freaks," while minimizing the testing time, reducing the BIT cost and avoiding damage to the sound devices. When HALT is relied upon to do the BIT job, it is not even easy to determine whether there exists a decreasing failure rate with time. There is, therefore, an obvious incentive to develop ways in which the BIT process could be better understood, trustworthily quantified, effectively monitored and possibly even optimized.

Accordingly, in this section some important BIT aspects are addressed for a packaged E&P product comprised of numerous mass-produced components. We intend to shed some quantitative light on the BIT process, and, since nothing is perfect (as has been indicated, the difference between a highly reliable process or product and an insufficiently reliable one is "merely" in the levels of their never-zero probability of failure), such a quantification should be done on the probabilistic basis. Particularly, we intend to come up with a suitable criterion to answer the fundamental "to BIT or not to BIT" question and, in addition, if BIT is decided upon, to find a way to quantify its outcome using our physically meaningful and flexible BAZ model.

In the analysis below, the role and significance of the following important factors that affect the testing time and the stress level are addressed: the random statistical failure rate (SFR) of the mass-produced components that the product of interest is comprised of; the way to assess, from the highly focused and highly cost-effective FOAT, the activation energy of the "freak" population of the manufacturing technology of interest; the role of the applied stressor(s); and, most importantly, the probabilities of the "freak" failures depending on the duration and level of the BIT loading, assessed using the BAZ equation as functions also of the variance of the random SFR of the constituent components. It is shown that the BTC-based time derivative of the failure rate at the initial moment of time (at the beginning of the IMP of the BTC) can be considered as a suitable criterion of whether BIT for a packaged IC device should or does not have to be conducted. It is shown also that this criterion is, in effect, the variance of the random SFR of the mass-produced components that the manufacturer of the given product received from numerous vendors, whose commitments to the reliability of their components are unknown, so that the random SFR of these components might vary significantly, from zero to infinity. Based on the developed general formula for the non-random SFR of a product comprised of such components, the solution for the case of a normally distributed random SFR of the constituent components has been obtained. This information enables answering the "to BIT or not to BIT" question in electronics manufacturing. If BIT is decided upon, the BAZ model can be employed for the assessment of its required duration and level. These factors should be considered whenever there is an intent to quantify and, eventually, to optimize the BIT procedure. The fundamental question is addressed using two mutually complementary and independent analyses: 1) The analysis of the configuration of the IMP of a BTC obtained for a more or less well-established manufacturing technology of interest; and 2) The analysis of the role of the random SFR of the mass-produced components that the product of interest is comprised of.

The desirable steady-state portion of the BTC commences at the BIT's end as a result of the interaction of two major irreversible time-dependent processes: The "favorable" statistical process that results in a decreasing failure rate with time, and the "unfavorable" physics-of-failure-related process resulting in an increasing failure rate. The first process dominates at the IMP of the BTC and is considered here. The IMP of a typical BTC, the "reliability passport" of a mass-produced electronic product using a more or less well established manufacturing technology, can be approximated as

$$\lambda(t) = \lambda_0 + \left(\lambda_1 - \lambda_0\right)\left(1 - \frac{t}{t_1}\right)^{n_1},\qquad 0 \le t \le t_1 \qquad\qquad (21)$$

Here $\lambda_0$ is the BTC's steady-state ordinate, $\lambda_1$ is its initial (highest) value at the beginning of the IMP, $t_1$ is the IMP duration, the exponent $n_1$ is $n_1 = \frac{\beta_1}{1-\beta_1}$, and $\beta_1$ is the fullness of the BTC's IMP. This fullness is defined as the ratio of the area below the BTC to the area $\left(\lambda_1 - \lambda_0\right)t_1$ of the corresponding rectangle. The exponent $n_1$ changes from zero to one when $\beta_1$ changes from zero to 0.5. The time derivative of the failure rate at the IMP's initial moment of time ($t = 0$) is

$$\lambda'(0) = -\frac{\lambda_1 - \lambda_0}{t_1}\,\frac{\beta_1}{1-\beta_1} \qquad\qquad (22)$$

If this derivative is zero or next-to-zero, this means that the IMP of the BTC is parallel to the time axis (so that there is, in effect, no IMP at all), that no BIT is needed to eliminate this portion, and "not to burn-in" is the answer to the basic question: the initial value $\lambda_1$ of the BTC is not different from its steady-state value $\lambda_0$. What is less obvious is that the same result takes place when $\beta_1/t_1 \to 0$. This means that, although BIT is needed, the testing could be short and low-level, because there are not too many "freaks" in the manufactured population and because, although these "freaks" exist, they are characterized by very low probabilities of non-failure, so that the planned BIT process could be a next-to-instantaneous one. The maximum value of the fullness is $\beta_1 = 0.5$. It corresponds to the case when the IMP of the BTC is a straight line connecting the initial, $\lambda_1$, and the steady-state, $\lambda_0$, BTC ordinates. In this case the derivative $\lambda'(0)$ is

$$\lambda'(0) = \left.\frac{d\lambda(t)}{dt}\right|_{t=0} = -\frac{\lambda_1 - \lambda_0}{t_1} \qquad\qquad (23)$$

and this seems to be the case when BIT is needed the most. It has been found that the expression for the non-random, time-dependent SFR

$\lambda_{ST}(t) = \dfrac{\displaystyle\int_0^{\infty}\lambda\,\exp(-\lambda t)\,f(\lambda)\,d\lambda}{\displaystyle\int_0^{\infty}\exp(-\lambda t)\,f(\lambda)\,d\lambda}$                (24)

can be obtained from the probability density function f(λ) of the random SFR λ of the components obtained from the vendors. When this rate is normally distributed, i.e.,

$f(\lambda) = \dfrac{1}{\sqrt{2\pi D}}\,\exp\!\left[-\dfrac{(\lambda-\bar{\lambda})^2}{2D}\right]$                (25)

the above formula yields:

$\lambda_{ST}(t) = \sqrt{2D}\,\varphi[\tau(t)]$                (26)

The "time function" φ[τ(t)] depends on the dimensionless "physical" (effective) time $\tau = t\sqrt{D/2} - s$, where the value $s = \dfrac{\bar{\lambda}}{\sqrt{2D}}$, known in the probabilistic reliability theory as the safety factor, can be interpreted as a measure of the degree of uncertainty of the random SFR. The time derivative λ'ST(t) with respect to the actual (real) time is

$\lambda'_{ST}(t) = \sqrt{2D}\,\dfrac{d\varphi[\tau(t)]}{dt} = \sqrt{2D}\,\dfrac{d\varphi}{d\tau}\,\dfrac{d\tau}{dt} = D\,\varphi'(\tau)$             (27)

It can be shown that the derivative φ'(τ) at the initial moment of time (t = 0) is equal to -1.0, so that $\lambda'_{ST}(0) = -D$. This result explains the physical meaning of this derivative: it is the variance (with a "minus" sign, of course) of the random SFR of the constituent components.
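A minimal numerical sketch of Eq. (24) for a normally distributed random SFR is shown below; the mean and variance used here are hypothetical, and the check simply confirms that the initial slope of the non-random SFR is close to -D, as stated above.

```python
import numpy as np

# Hypothetical vendor-lot SFR statistics (illustrative values only)
lam_mean = 1.0e-3   # mean of the random SFR, 1/h
D = 4.0e-8          # variance of the random SFR, (1/h)^2

def lam_st(t):
    """Non-random SFR of Eq. (24): ratio of two integrals over the
    normal probability density f(lambda) of the random SFR."""
    lam = np.linspace(0.0, lam_mean + 10.0*np.sqrt(D), 200001)  # lambda >= 0 only
    f = np.exp(-(lam - lam_mean)**2 / (2.0*D)) / np.sqrt(2.0*np.pi*D)
    w = np.exp(-lam*t) * f
    return np.sum(lam*w) / np.sum(w)      # common grid spacing cancels in the ratio

dt = 1.0                                   # 1 h finite-difference step
slope0 = (lam_st(dt) - lam_st(0.0)) / dt
print(f"lam_st(0) = {lam_st(0.0):.4e} 1/h (close to the mean SFR)")
print(f"initial slope = {slope0:.3e}, -D = {-D:.1e}")
```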

As to the use of the kinetic BAZ model in the problem in question, it suggests a simple, easy-to-use, highly flexible and physically meaningful way to evaluate the probability of failure of a material or a device after the given time in testing or operation at the given temperature and under the given stress or stressors. Using this model, the probability of non-failure during the BIT can be sought as

$P = \exp\!\left[-\gamma_t D I^{*} t\,\exp\!\left(-\dfrac{U_0 - \gamma_\sigma\sigma}{kT}\right)\right]$                (28)

Here D is the variance of the random SFR of the mass-produced components, I is the measured/monitored signal (such as, e.g., the leakage current, whose agreed-upon high value I* is considered as an indication of failure; or an elevated electrical resistance, particularly suitable for solder joint interconnections), t is time, σ is the "external" stressor, U0 is the activation energy (unlike in the original BAZ model, this energy may or may not be affected by the level of the external stressor), T is the absolute temperature, γσ is the stress sensitivity factor for the applied stress and γt is the time/variance sensitivity factor. The above distribution makes physical sense. Indeed, the probability P of non-failure decreases with an increase in the variance D, in the time t, in the level I* of the leakage current at failure and in the temperature T, and increases with an increase in the activation energy U0. As has been shown, the maximum of the entropy of the probability of non-failure takes place at the moment of time

$t = \dfrac{1}{\gamma_t D I^{*}}\,\exp\!\left(\dfrac{U_0 - \gamma_\sigma\sigma}{kT}\right)$                (29)

accepted in the BAZ model as the MTTF. There are three unknowns in this expression: the product ρ = γtD of the time/variance sensitivity factor γt and the variance D, the stress sensitivity factor γσ, and the activation energy U0. These unknowns, as has been demonstrated in previous examples, could be determined from a two-step FOAT. At the first step, testing should be carried out at two temperatures, T1 and T2, but at the same stress level σ, so that the effective activation energy U = U0 - γσσ remains the same. Then the relationships

$P_{1,2} = \exp\!\left[-\rho I^{*} t_{1,2}\,\exp\!\left(-\dfrac{U_0 - \gamma_\sigma\sigma}{kT_{1,2}}\right)\right]$                 (30)

for the measured probabilities of non-failure can be obtained. Here t1,2 are the corresponding times at which the failures have been detected and I* is the agreed-upon leakage current at failure. Since the numerator U = U0 - γσσ in these relationships is kept the same in the conducted tests, the product ρ = γtD can be found as

$\rho = \exp\!\left(\dfrac{1}{\theta-1}\,\dfrac{n_2^{\theta}}{n_1}\right)$               (31)

Where the notations $n_{1,2} = -\dfrac{\ln P_{1,2}}{I^{*} t_{1,2}}$ and $\theta = \dfrac{T_2}{T_1}$ are used. The second step of testing is aimed at the evaluation of the stress sensitivity factor γσ and should be conducted at two stress levels, σ1 and σ2 (say, temperatures or voltages). If the stresses σ1 and σ2 are thermal stresses determined for the temperatures T1 and T2, they could be evaluated using a suitable stress model. Then

$\gamma_\sigma = \dfrac{k\left[T_1\ln n_1 - T_2\ln n_2 + (T_2 - T_1)\ln\rho\right]}{\sigma_1 - \sigma_2}$                (32)

If, however, the external stress is not a thermal stress, then the temperatures at the second step tests should preferably be kept the same. Then the ρ value will not affect the factor γσ, which could be found as

$\gamma_\sigma = \dfrac{kT}{\sigma_1 - \sigma_2}\,\ln\dfrac{n_1}{n_2}$              (33)

Where T is the testing temperature. Finally, after the product ρ and the factor γσ are determined, the activation energy U0 can be determined as

$U_0 = -kT_1\ln\dfrac{n_1}{\rho} + \gamma_\sigma\sigma_1 = -kT_2\ln\dfrac{n_2}{\rho} + \gamma_\sigma\sigma_2$             (34)

The TTF can obviously be determined as TTF = MTTF(-ln P), where the MTTF has been defined above.

Let, e.g., the following data be obtained at the first step of FOAT: 1) After t1 = 14 h of testing at the temperature of T1 = 60 ℃ = 333 K, 90% of the tested devices reached the critical level of the leakage current of I* = 3.5 μA and, hence, failed, so that the recorded probability of non-failure is P1 = 0.1; the applied stress is an elevated voltage σ1 = 380 V; 2) After t2 = 28 h of testing at the temperature of T2 = 85 ℃ = 358 K, 95% of the samples failed, so that the recorded probability of non-failure is P2 = 0.05. The applied stress is still the elevated voltage σ1 = 380 V. Then the parameters

$n_{1,2} = -\dfrac{\ln P_{1,2}}{I^{*} t_{1,2}}$   are   $n_1 = -\dfrac{\ln P_1}{I^{*} t_1} = -\dfrac{\ln 0.1}{3.5\times 14} = 4.6991\times 10^{-2}\ \mu A^{-1}h^{-1}$

and

$n_2 = -\dfrac{\ln P_2}{I^{*} t_2} = -\dfrac{\ln 0.05}{3.5\times 28} = 3.0569\times 10^{-2}\ \mu A^{-1}h^{-1}$

With the temperature ratio $\theta = \dfrac{T_2}{T_1} = \dfrac{358}{333} = 1.0751$, we have:

$\rho = \exp\!\left(\dfrac{1}{\theta-1}\,\dfrac{n_2^{\theta}}{n_1}\right) = \exp\!\left(\dfrac{1}{0.0751}\,\dfrac{0.030569^{1.075}}{0.046991}\right) = 785.3197\ \mu A^{-1}h^{-1}$

At the second step of FOAT one can use, without conducting additional testing, the above information from the first step, including its duration and outcome. Let the second step of testing show that after t2 = 36 h of testing at the same temperature of T = 60 ℃ = 333 K, 98% of the tested samples failed, so that the recorded probability of non-failure is P2 = 0.02. If the stress σ2 is the elevated voltage σ2 = 220 V, then the parameter n2 becomes

$n_2 = -\dfrac{\ln P_2}{I^{*} t_2} = -\dfrac{\ln 0.02}{3.5\times 36} = 3.1048\times 10^{-2}\ \mu A^{-1}h^{-1}$

and the sensitivity factor γσ for the applied stress is

$\gamma_\sigma = \dfrac{kT}{\sigma_1 - \sigma_2}\,\ln\dfrac{n_1}{n_2} = \dfrac{8.61733\times 10^{-5}\times 333}{380 - 220}\,\ln\dfrac{4.6991\times 10^{-2}}{3.1048\times 10^{-2}} = 7.4326\times 10^{-5}\ eV\cdot V^{-1}$

The zero-stress activation energy calculated for the above parameters n1 and n2 and the stresses σ1 and σ2 is

$U_0 = -kT\ln\dfrac{n_1}{\rho} + \gamma_\sigma\sigma_1 = -8.61733\times 10^{-5}\times 333\,\ln\dfrac{4.6991\times 10^{-2}}{785.3197} + 7.4326\times 10^{-5}\times 380 = 0.2790 + 0.0282 = 0.3072\ eV$

To make sure that there was no calculation error, the zero-stress activation energy can be found also as

$U_0 = -kT\ln\dfrac{n_2}{\rho} + \gamma_\sigma\sigma_2 = -8.61733\times 10^{-5}\times 333\,\ln\dfrac{3.1048\times 10^{-2}}{785.3197} + 7.4326\times 10^{-5}\times 220 = 0.2909 + 0.0164 = 0.3072\ eV$

No wonder that these values are considerably lower than the activation energies of "healthy" products. Many manufacturers use, as a sort of "rule of thumb", the level of 0.7 eV as an appropriate tentative number for the activation energy of healthy electronic products. In this connection it should be indicated that, when the BIT process is monitored and the activation energy U0 is continuously calculated based on the number of the failed devices, the BIT process should be terminated as soon as the calculations, based on the observed and recorded FOAT data, indicate that the stress-free activation energy U0 starts to increase. The MTTF can be computed as

$t = MTTF = \dfrac{1}{\rho I^{*}}\,\exp\!\left(\dfrac{U_0 - \gamma_\sigma\sigma}{kT}\right) = \dfrac{1}{785.3197\times 3.5}\,\exp\!\left(\dfrac{0.3072 - 7.4326\times 10^{-5}\,\sigma}{8.61733\times 10^{-5}\times 333}\right) = 16.1835\ h$

The TTF, however, depends on the probability of non-failure. Its values, calculated as TTF = MTTF × (-ln P), are shown in Table 2.
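For readers who prefer to check the above two-step FOAT arithmetic programmatically, here is a minimal Python sketch of Eqs. (29)-(34) using the example data; the evaluation of the MTTF at zero external stress is an assumption of this sketch, made because it approximately reproduces the 16.18 h quoted above.

```python
import math

k = 8.61733e-5   # Boltzmann constant, eV/K
I_star = 3.5     # agreed-upon leakage current at failure, micro-A

# First FOAT step: two temperatures, same stress (elevated voltage, 380 V)
t1, T1, P1 = 14.0, 333.0, 0.10
t2, T2, P2 = 28.0, 358.0, 0.05
n1 = -math.log(P1) / (I_star * t1)               # ~4.699e-2 1/(micro-A h)
n2 = -math.log(P2) / (I_star * t2)               # ~3.057e-2 1/(micro-A h)
theta = T2 / T1                                  # ~1.0751
rho = math.exp((n2**theta / n1) / (theta - 1.0)) # Eq. (31); close to the 785.32 quoted above

# Second FOAT step: same temperature (333 K), two voltages (380 V and 220 V)
sigma1, sigma2, T = 380.0, 220.0, 333.0
t2b, P2b = 36.0, 0.02
n2b = -math.log(P2b) / (I_star * t2b)            # ~3.105e-2 1/(micro-A h)
gamma_sigma = k * T * math.log(n1 / n2b) / (sigma1 - sigma2)   # Eq. (33); ~7.43e-5 eV/V
U0 = -k * T * math.log(n1 / rho) + gamma_sigma * sigma1        # Eq. (34); ~0.307 eV

# MTTF from Eq. (29); sigma = 0 (zero external stress) is assumed here
MTTF = math.exp(U0 / (k * T)) / (rho * I_star)                 # ~16 h
TTF_99 = MTTF * (-math.log(0.99))    # TTF for a 0.99 probability of non-failure
print(f"rho = {rho:.1f}, gamma_sigma = {gamma_sigma:.3e} eV/V, U0 = {U0:.4f} eV")
print(f"MTTF = {MTTF:.1f} h, TTF(P = 0.99) = {TTF_99:.3f} h")
```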

Clearly, the probabilities of non-failure for successful BITs should be low enough. It is clear also that the BIT process should be terminated when the calculated probabilities of non-failure and the activation energy U0 start rapidly increasing. Although our BIT analyses do not suggest any straightforward and complete way to optimize BIT, they nonetheless shed useful and insightful light on the significance of some important factors that affect the need for BIT and, if it is decided upon, its required time and stress level for a packaged product comprised of mass-produced components.

Adequate Trust is an Important HCF Constituent


"If a man will begin with certainties he will end with doubts; but if he will be content to begin with doubts, he shall end in certainties".

Francis Bacon, English philosopher and statesman, ‘The Advancement of Learning'

From the Shakespearian "love all, trust a few" and "don't trust the person who has broken faith once" to today's Lady Gaga's "trust is like a mirror, you can fix it if it's broken, but you can still see the crack in that mother f*cker's reflection", the importance of human-human trust has been addressed by numerous writers, politicians and psychologists in connection with the role of the human factor in making a particular engineering undertaking successful and safe [59-66]. It was the 19th century South Dakota politician and clergyman Frank Crane who seems to have been the first to indicate the importance of an adequate trust in human relationships. Here are a couple of his quotes: "You may be deceived if you trust too much, but you will live in torment unless you trust enough"; "We're never so vulnerable than when we trust someone - but paradoxically, if we cannot trust, neither can we find love or joy"; "Great companies that build an enduring brand have an emotional relationship with customers that has no barrier. And that emotional relationship is built on the most important characteristic, which is trust". Hoff and Bashir [61] considered the role of trust in automation. Madhavan and Wiegmann [62] drew attention to the importance of trust in engineering and, particularly, to similarities and differences between human-human and human-automation trust. Rosenfeld and Kraus [63] addressed human decision making and its consequences, with consideration of the role of trust. Chatzi, Wayne, Bates and Murray [64] provided a comprehensive review of trust considerations in aviation maintenance practice. The analysis in this section [65] is, in a way, an extension and a generalization of the recent Kaindl and Svetinovic [66] publication, and addresses some important aspects of the human-in-the-loop (HITL) problem for safety-critical missions and extraordinary situations, as well as in engineering technologies. It is argued that the role and significance of trust can and should be quantified when preparing such missions. The author is convinced that otherwise the concept of an adequate trust simply cannot be effectively addressed and included in an engineering technology, design methodology or a human activity, when there is a need to assure a successful and safe outcome of a particular engineering undertaking or an aerospace or a military mission. Since nobody and nothing is perfect, and the probability-of-failure is never zero, such a quantification should be done on the probabilistic basis. Adequate trust is an important human quality and a critical constituent of the human capacity factor (HCF) [67-70]. When evaluating the outcome of a HITL related mission or an off-normal situation, the role of the HCF should always be considered and even quantified vs. the level of the mental workload (MWL). While the notion of the MWL is well established in aerospace and other areas of human psychology and is reasonably well understood and investigated (see, e.g., [71-89]), the importance of the HCF has been emphasized by the author of this paper, who introduced the notion only several years ago. The rationale behind such an introduction is that it is not the absolute MWL level, but the relative levels of the MWL and HCF that determine, in addition to other critical factors, the probability of the human non-failure in a particular off-normal situation of interest. The majority of pilots with an ordinary HCF would most likely have failed in the "miracle-on-the-Hudson" situation, while "Sully", with his extraordinarily high anticipated HCF, did not.

HCF includes, but might not be limited to, the following human qualities that enable a professional to successfully cope, when necessary, with an elevated off-normal MWL: Age, fitness, health; personality type; psychological suitability for a particular task; professional experience, qualifications, and intelligence; education, both special and general; relevant capabilities and skills; level, quality and timeliness of training; performance sustainability (consistency, predictability); independent thinking and independent acting, when necessary; ability to concentrate; ability to anticipate; ability to withstand fatigue in general and, when driving a car, drowsiness (this ability might be considerably different depending on whether it is "old fashioned" manual or automated driving (AD) [90]); self-control and ability to "act in cold blood" in hazardous and even life threatening situations; mature (realistic) thinking; ability to operate effectively under time pressure; ability to operate effectively, when necessary, in a tireless fashion, for a long period of time (tolerance to stress); ability to make well substantiated decisions in a short period of time; team-player attitude, when necessary; ability and willingness to follow orders, when necessary; swiftness in reaction, when necessary; adequate trust; and ability to maintain the optimal level of physiological arousal. These and other qualities are certainly of different importance in different HITL situations.

HCF could be time-dependent.

It is clear that different individuals possess these qualities in different degrees. Captain Chesley Sullenberger ("Sully"), the hero of the famous miracle-on-the-Hudson event, did indeed possess an outstanding HCF. As a matter of fact, the "miracle" was not that he managed to ditch the aircraft successfully in an extraordinary situation, but that an individual like Captain Sullenberger, and not a pilot with a regular HCF, turned out to be behind the wheel in such a situation. As far as the quality of an adequate trust is concerned, Captain Sullenberger certainly "avoided over-trust" in the ability of the first officer, who ran the aircraft when it took off from LaGuardia airport, to successfully cope with the situation, when the aircraft struck a flock of Canada geese and lost engine power. Captain Sullenberger took over the controls, while the first officer began going through the emergency procedures checklist in an attempt to find information on how to restart the engines and what to do, with the help of the air traffic controllers at LaGuardia and Teterboro airports, to bring the aircraft to these airports and hopefully to land it there safely. What is even more important is that Captain Sullenberger also effectively and successfully "avoided under-trust" in his own skills, abilities and extensive experience that would enable him to successfully cope with the situation: 57-year-old Captain Sully was a former fighter pilot, a safety expert, a professional development instructor and a glider pilot. That was the rare case when "team work" (such as, say, sharing his "wisdom" and intent with flight controllers at LaGuardia and Teterboro) was not the right thing to pursue until the very moment of ditching. Captain Sully had trust in the aircraft structure, which would be able to successfully withstand the slam of the water during ditching and, in addition, would enable slow enough flooding after ditching. It turned out that the crew did not activate the "ditch switch" during the incident, but Capt. Sullenberger later noted that it probably would not have been effective anyway, since the water impact tore holes in the plane's fuselage that were much larger than the openings sealed by the switch. Captain Sully had trust in the aircraft safety equipment that was carried in excess of that mandated for the flight. He also had trust in the outstanding cooperation and excellent cockpit resource management among the flight crew, who trusted their captain and exhibited outstanding team work (that is where such work was needed, was useful and successful) during landing and the rescue operation. The area where the aircraft landed was one where fast response from, and effective help of, the various ferry operators located near the USS Intrepid ship/museum, as well as the ability of the rescue team to provide timely and effective assistance, could be expected and relied upon, and Capt. "Sully" actually did rely on them. The environmental conditions and, particularly, the visibility were excellent and were an important contributing factor to the survivability of the accident. All these trust related factors played an important role in Captain Sullenberger's ability to successfully ditch the aircraft and save lives. As is known, the crew was later awarded the Master's Medal of the Guild of Air Pilots and Air Navigators for successful "emergency ditching and evacuation, with the loss of no lives… a heroic and unique aviation achievement… the most successful ditching in aviation history".
National Transportation Safety Board (NTSB) Member Kitty Higgins, the principal spokesperson for the on-scene investigation, said at a press conference the day after the accident that it "has to go down [as] the most successful ditching in aviation history… These people knew what they were supposed to do and they did it and as a result, nobody lost their life". The flight crew, and, first of all, Captain Sullenberger, were widely praised for their actions during the incident, notably by then New York City Mayor Michael Bloomberg and New York State Governor David Paterson, who opined, "We had a Miracle on 34th Street. I believe now we have had a Miracle on the Hudson." Outgoing U.S. President George W. Bush said he was "inspired by the skill and heroism of the flight crew", and he also praised the emergency responders and volunteers. Then President-elect Barack Obama said that everyone was proud of Sullenberger's "heroic and graceful job in landing the damaged aircraft", and thanked the A320's crew.

The double-exponential probability density function (DEPDF) [70] for the random HCF has been revisited in the addressed adequate trust problem with an intent to show that the entropy of this distribution, when applied to the trustee, can be viewed as an appropriate quantitative characteristic of the propensity of a human to make a decision influenced by an under-trust or an over-trust. DEPDF's entropy for the human non-failure sheds quantitative light on why under-trust and over-trust should be avoided. A suitable modification of the DEPDF for the human non-failure, whether it is the performer (decision maker) or the trustee, could be assumed in the following simple form

$P = \exp\!\left[-\gamma t\,\exp\!\left(-\dfrac{F}{G}\right)\right]$                 (35)

Where P is the probability of non-failure, t is time, F is the HCF, G is the MWL, and γ is the sensitivity factor for the time.

The expression for the probability of non-failure P makes physical sense. Indeed, the probability P of human non-failure, when fulfilling a certain task, decreases with an increase in time and increases with an increase in the ratio of the HCF to the mental workload (MWL). At the initial moment of time (t = 0) the probability of non-failure is P = 1, and it decreases exponentially with time, especially for low F/G ratios. For very large HCF-to-MWL ratios the probability P of non-failure remains significant even for not-very-short operation times. The above expression, depending on a particular task and application, could be applied either to the performer (the decision maker) or to the trustee. The trustee could be a human, a technology, a concept, an existing best practice, etc.

The ergonomics underlying the above distribution could be seen from the time derivative $\dfrac{dP}{dt} = -\dfrac{H(P)}{t}$, where H(P) = -P ln P is the entropy of this distribution. The formula for the time derivative of the probability of non-failure indicates that the above DEPDF reflects an assumption that the time derivative of the probability of non-failure is proportional to the entropy of this distribution and inversely proportional to the elapsed time. As to the expression for the DEPDF, it sheds useful quantitative light on the Ref. [67] recommendation that both under-trust and over-trust should be avoided. The entropy H(P), when applied to the above distribution and viewed in this case as a characteristic of the probability of non-failure of the trustee's performance, is zero for both extreme values of this performance: When the probability of the trustee's non-failure is zero, it should be interpreted as an extreme under-trust in someone else's authority or expertise (the "not invented here (NIH)" syndrome, which is typical for big organizations or corporations); when the probability of the trustee's non-failure is one, that means that there is an extreme over-trust in the trustee's technology and/or leadership abilities: "my neighbor's grass is always greener" and "no man is a prophet in his own land". This is, as is known, typical for small companies or organizations.
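A quick numerical check of this entropy relation, assuming Eq. (35) as reconstructed above, is sketched below; the sensitivity factor and the F/G ratio are arbitrary illustrative values.

```python
import math

gamma, F_over_G = 0.02, 2.0        # illustrative values only

def P(t):                          # Eq. (35): probability of non-failure
    return math.exp(-gamma * t * math.exp(-F_over_G))

t, h = 10.0, 1.0e-5
dPdt = (P(t + h) - P(t - h)) / (2.0 * h)   # numerical time derivative
H = -P(t) * math.log(P(t))                 # entropy of the distribution
print(dPdt, -H / t)                        # the two values should coincide
```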

The role of the human factor (HF) in various, mostly aerospace, missions and situations was addressed in numerous publications (see, e.g., [68-89]). When PPM analyses are conducted with an intent to assess the probability of non-failure, considering the role of the individual's HCF vs. his/her MWL, a suitable model is the DEPDF-based one. This model is similar to the BAZ model in that it also leads to a double-exponential relationship, but, unlike the BAZ model, it does not contain temperature as a parameter affecting the TTF. As in the BAZ model, the necessary parameters of the DEPDF model can be obtained for the given HCF and MWL from an appropriately designed and conducted FOAT.

Let us show how this could be done, using as an example the role of the HF in aviation. A flight simulator could be employed as an appropriate FOAT vehicle to quantify, on the probabilistic basis, the required level of the HCF with respect to the expected MWL when fulfilling a particular mission. When designing and conducting a FOAT aimed at the evaluation of the sensitivity parameter γ in the distribution for the probability of non-failure, a certain MWL factor I (electro-cardiac activity, respiration, skin-based measures, blood pressure, ocular measurements, brain measures, etc.) should be monitored and measured on a continuous basis until its agreed-upon high value I*, viewed as an indication of a human failure, is reached. Then the above DEPDF distribution for the probability of non-failure could be written as

$P = \exp\!\left[-\gamma t I^{*}\exp\!\left(-\dfrac{F}{G}\right)\right]$                 (36)

Bringing together a group of more or less equally and highly qualified individuals, one should proceed from the fact that the HCF is a characteristic that remains more or less unchanged for these individuals during the relatively short time of the FOAT. The MWL, on the other hand, is a short-term characteristic that can be tailored, in many ways, depending on the anticipated MWL conditions. From the above expression we have:

$G\ln\dfrac{\gamma}{n} = F = \mathrm{Const}$                  (37)

Where $n = -\dfrac{\ln P}{I^{*} t}$. Let the FOAT be conducted at two MWL levels, G1 and G2, and let the criterion I* be observed and recorded at the times t1 and t2 for the established (observed, recorded) percentages Q1 = 1 - P1 and Q2 = 1 - P2, respectively. Then the condition that the HCF F should remain unchanged enables one to obtain the following formula for the sensitivity factor γ:

$\gamma = \exp\!\left[\dfrac{\ln n_2 - \dfrac{G_1}{G_2}\ln n_1}{1 - \dfrac{G_1}{G_2}}\right]$                    (38)

The HCF of the individuals that underwent the accelerated testing can be determined as:

$F = G_1\ln\dfrac{\gamma}{n_1} = G_2\ln\dfrac{\gamma}{n_2}$              (39)

Let, e.g., the same group of individuals be tested at two different MWL levels, G1 and G2, until failure (whatever its definition and nature might be), and let the MWL ratio be $\dfrac{G_2}{G_1} = 2$. Because of that, the TTF was considerably shorter and the number of the failed individuals was considerably larger, for the same I* level (say, I* = 120), in the second round of tests. Let, e.g., the probabilities of non-failure and the corresponding times be P1 = 0.8, P2 = 0.5, t1 = 2.0 h and t2 = 1.5 h. Then the ratios n1,2 are

$n_1 = -\dfrac{\ln P_1}{t_1 I^{*}} = -\dfrac{\ln 0.8}{2\times 120} = 9.2976\times 10^{-4},\qquad n_2 = -\dfrac{\ln P_2}{t_2 I^{*}} = -\dfrac{\ln 0.5}{1.5\times 120} = 38.5082\times 10^{-4}$

and the following values for the sensitivity factor and the required HCF-to-MWL ratio can be obtained:

$\gamma = \exp\!\left[\dfrac{\ln n_2 - \dfrac{G_1}{G_2}\ln n_1}{1 - \dfrac{G_1}{G_2}}\right] = \exp\!\left[\dfrac{\ln(38.5082\times 10^{-4}) - 0.5\ln(9.2976\times 10^{-4})}{1 - 0.5}\right] = 0.015948$

$\dfrac{F}{G_1} = \ln\dfrac{\gamma}{n_1} = \ln\dfrac{0.015948}{9.2976\times 10^{-4}} = 2.8422,\qquad \dfrac{F}{G_2} = \ln\dfrac{\gamma}{n_2} = \ln\dfrac{0.015948}{38.5082\times 10^{-4}} = 1.4210$

The calculated required HCF-to-MWL ratios

$\dfrac{F}{G} = \ln\dfrac{I^{*} t}{62.7038\,(-\ln P)}$   (with 62.7038 = 1/γ)

for different probabilities of non-failure and for different times are shown in Table 3.
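A short Python sketch of this flight-simulator FOAT data reduction (Eqs. (36)-(39)), using the example numbers above, is given below; the required-ratio helper mirrors the Table 3 relationship as reconstructed here, and the target probability and time in the usage line are arbitrary.

```python
import math

I_star = 120.0                  # agreed-upon MWL-symptom threshold from the example

# FOAT outcomes at two MWL levels with G2/G1 = 2 (example data from the text)
P1, t1 = 0.8, 2.0               # probability of non-failure and time at MWL G1
P2, t2 = 0.5, 1.5               # probability of non-failure and time at MWL G2
G1_over_G2 = 0.5

n1 = -math.log(P1) / (t1 * I_star)      # ~9.30e-4
n2 = -math.log(P2) / (t2 * I_star)      # ~38.5e-4

# Sensitivity factor, Eq. (38), and the achieved HCF-to-MWL ratios, Eq. (39)
gamma = math.exp((math.log(n2) - G1_over_G2 * math.log(n1)) / (1.0 - G1_over_G2))
F_over_G1 = math.log(gamma / n1)        # ~2.84
F_over_G2 = math.log(gamma / n2)        # ~1.42

def required_ratio(P, t):
    """F/G needed for a target probability of non-failure P at operation time t (h)."""
    return math.log(gamma * I_star * t / (-math.log(P)))

print(f"gamma = {gamma:.6f}, F/G1 = {F_over_G1:.4f}, F/G2 = {F_over_G2:.4f}")
print(f"F/G needed for P = 0.9999 over t = 10 h: {required_ratio(0.9999, 10.0):.2f}")
```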

As evident from the calculated data, the level of the HCF in this example should exceed considerably the level of the MWL, so that a high enough value of the probability of human non-failure is achieved, especially for long operation times. It is concluded that trust is an important HCF quality and should be included in the list of such qualities for a particular "human-in-the-loop" task. The HCF should be evaluated vs. MWL, when there is a need to assure a successful and safe outcome of a particular aerospace or military mission, or when considering the role of a HF in a non-vehicular engineering system. The DEPDF for the random HCF is revisited, and it is shown particularly that its entropy can be viewed as an appropriate quantitative characteristic of the propensity of a human to an under-trust or an over-trust judgment and, as a consequence, to erroneous decision making or a performance error.

PPM of an Emergency-Stopping Situation in AD or on a RR


"Education is man's going forward from cocksure ignorance to thoughtful uncertainty."

Kenneth G. Johnson, American high-school English teacher

Automotive engineering is entering a new frontier - the AD era [91-98]. Level 3 of driving automation, conditional automation, as defined by SAE [96], considers a vehicle controlled autonomously by the system, but only under 'specific conditions'. These conditions include speed control, steering, and braking, as well as monitoring the environment. When/if, however, such conditions are no longer met, and monitoring of the environment reveals an unexpected or uncontrollable situation, the system is supposed to hand over control to the human operator. The new AD frontier requires, on the one hand, the development of advanced navigation equipment and instrumentation - first of all, an effective and reliable AD system itself, but also numerous cameras, radars, LiDARs ("optical radars") and other electro-optic means with fast and effective processing capabilities. On the other hand, special qualifications and attitudes are required of the key HITL "component" of the system - the driver. It is he/she who is ultimately responsible for the vehicle and passenger safety, and he/she should effectively interact with the system on a permanent basis. It is imperative that the driver of an AD vehicle receives special training before operating such a vehicle, and this requirement should be reflected in his/her driver's license.

While one has to admit that at present "we do not even know what we do not know" [91] about the challenges and pitfalls associated with the use of AD systems, we do know, however, that the HITL role will hardly change in the foreseeable future, even when more advanced AD equipment is developed and installed. What is also clear is that the safe outcome of an off-normal AD related situation cannot be assured if it is not quantified, and that, because of various inevitable and unpredictable intervening uncertainties, such quantification should be done on the probabilistic basis. In effect, the difference between a highly reliable and an insufficiently reliable performance of a system or a human is "merely" the difference in the never-zero probabilities of their failure. Accordingly, PAM is employed in this analysis to predict the likelihood of a possible collision, when the system and/or the driver (the significance of this important distinction has still to be determined and decided upon [98]) suddenly detects a steadfast obstacle, and when the only way to avoid collision is to decelerate the vehicle using brakes. We would like to emphasize that PPM should always be considered to complement computer simulations in various HITL and AD related problems. These two modeling approaches are usually based on different assumptions and use different evaluation techniques, and if the results obtained using these two different approaches are in a reasonably good agreement, then there is reason to believe that the obtained data are sufficiently accurate and trustworthy.

It has been demonstrated, mostly in application to the aerospace domain, how PPM could be effectively employed when the reliability of the equipment (instrumentation), both its hard- and software, and the human performance contribute jointly to the outcome of a vehicular mission or an extraordinary situation. One of the developed models, the convolution model, is brought here "down to earth", i.e., extended, with appropriate modifications, to the AD situation, when there is a need to avoid collision. The automotive vehicle environment might be much less forgiving than the aerospace one: While slight deviations in aircraft altitude, speed, or human actions are often tolerable without immediate consequences, a motor vehicle is likely to have much tighter control requirements for avoiding collision than an aircraft. We would like to point out also that the driver of an AD vehicle should possess special "professional" qualities associated with his/her need to interact with an AD system. These qualities should be much higher and more specific than those today's amateur driver possesses.

The pre-deceleration time (which includes the decision-making time, the pre-braking time and, to some extent, also the brake-adjusting time) and the corresponding distance (σ0) characterize, in the extraordinary situation in question, when compared to the deceleration time and distance (σ1), the role of the HCF. Indeed, if this factor is large (the driver reacts fast and effectively), the ratio $\eta = \dfrac{\sigma_1}{\sigma_0}$ is significant. It is also noteworthy that the successful outcome of an extraordinary AD related situation depends also on the level of trust of the human driver towards the system and on the system's user-friendly and failure-free performance. Adequate trust should therefore be viewed as an important HCF constituent in making AD sufficiently safe. A more or less detailed evaluation of the role of the driver's trust towards the AD system performance is, however, beyond the scope of this analysis and is considered as future work. We would like to indicate also that the overall distance of the trip and the driver's fatigue and state-of-health might have a significant effect on his/her alertness. This circumstance should also be considered and possibly quantified. This effort is also considered, however, as future work.

When a deterministic approach is used to quantify the role of the major factors affecting the safety of the outcome of a possible collision situation, when an obstacle is suddenly detected in front of the moving vehicle, the role of the HF could be quantified by the ratio $HF = \dfrac{S_1}{S_0 + S_1} = \dfrac{S_1}{S}$, where S0 is the pre-deceleration distance, S1 is the deceleration distance, and S = S0 + S1 is the stopping distance. The factor HF changes from one to zero, when the distance S0 that characterizes the human performance changes from zero (exceptionally high performance) to a large number (low performance). As has been indicated, special training might be necessary to make the human performance adequate for a particular AD system and vehicle type, and the relevant information should even be included in the driver's license.

The pre-deceleration time, during which the vehicle still moves at its constant initial speed, includes: 1) Decision-making time, i.e., the time that the system and/or the driver need to decide whether the driver has to intervene and to take over the control of the vehicle; 2) Pre-braking time that the driver needs to make his/her decision on pushing the brakes; and 3) Brake-adjusting time needed to adjust the brakes, when interacting with the vehicle's anti-lock (anti-skid) braking system; although both the human and the vehicle performance affect this third period of time and the corresponding distance, it can be conservatively assumed that the brake-adjusting time is simply part of the pre-deceleration time. Thus, two major critical periods could be distinguished in an approximate PPM of a possible collision situation:

1) The pre-deceleration time, counted from the moment of time when the steadfast obstacle was detected until the time when the vehicle starts to decelerate. This time depends on driver experience, age, fatigue and other relevant items of his/her HCF. It could be assumed that during this time the vehicle keeps moving with its initial speed V0 and that it is this time that characterizes the performance of the driver. If, e.g., the vehicle's initial speed is V0 = 10 m/s and the pre-deceleration time is T0 = 3.0 s, then the corresponding distance is S0 = V0T0 = 30 m; and 2) The deceleration time, which can be evaluated as $T_1 = \dfrac{2S_1}{V_0} = \dfrac{V_0}{a}$. In this formula, obtained assuming a constant deceleration a, S1 is the stopping distance during the deceleration time (deceleration distance). If, e.g., a = 2.0 m/s² (it is this deceleration that characterizes the vehicle's ability to effectively decelerate), and the initial velocity is V0 = 10 m/s, then the deceleration time is $T_1 = \dfrac{V_0}{a} = 5.0\ s$, and $S_1 = \dfrac{V_0 T_1}{2} = 25\ m$ is the corresponding distance.

The total stopping distance is therefore S = S0 + S1 = 55 m, so that the contributions of the two main constituents of this distance are comparable in this example. Note that, as follows from the formula $S = V_0\left(T_0 + \dfrac{T_1}{2}\right)$ for the total stopping distance, the pre-deceleration time T0, affected by the human performance, might be even more critical than the deceleration time T1, affected by the decelerating vehicle and its braking system. Both the vehicle's and its braking system's performance affect this time. The total stopping time is simply proportional to the initial velocity, which should be low enough to avoid an accident and allow the driver to make his/her brake-no-brake decision and push the brakes in a timely fashion. The human factor is $HF = \dfrac{S_1}{S_0 + S_1} = 0.4545$ in this example. If the actual distance S is smaller than the ASD Ŝ determined by the radar or LiDAR, then collision could possibly be avoided. In the above example, the ASD should not be smaller than, say, Ŝ = 56 m to avoid collision. The PAM, based on the Rayleigh distribution for the operational time and distance (see next section), indicates, however, that for low enough probabilities of collision the ASD should be considerably larger than that (see Table 4 data).
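The deterministic bookkeeping of this example can be summarized in a few lines of Python; the deceleration a = 2.0 m/s² is used here because it reproduces the quoted S1 = 25 m and S = 55 m.

```python
# Deterministic stopping-distance bookkeeping for the example above
V0, T0, a = 10.0, 3.0, 2.0    # initial speed (m/s), pre-deceleration time (s), deceleration (m/s^2)
S0 = V0 * T0                   # pre-deceleration distance: 30 m
T1 = V0 / a                    # deceleration time: 5 s
S1 = V0 * T1 / 2.0             # deceleration distance: 25 m
S = S0 + S1                    # total stopping distance: 55 m
HF = S1 / S                    # human-factor ratio: ~0.4545
print(S0, T1, S1, S, round(HF, 4))
```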

In reality, none of the above times and the corresponding distances are known, or could be, or even will ever be evaluated with sufficient certainty, and there is therefore an obvious incentive to employ a probabilistic approach to assess the likelihood of an accident. To some extent, our predictive model is similar to the convolution model applied in the helicopter-landing-ship situation [85], where, however, random times, and not random distances, were considered. If the probability $P(S \ge \hat{S})$ that the random sum S = S0 + S1 of the two random distances S0 and S1 is larger than the anticipated sight distance (ASD) Ŝ to the obstacle, determined by the system for the moment of time when the obstacle was detected, is sufficiently low, then there is a good chance and a good reason to believe that collision will be avoided.

It is natural to assume that the random times T0 and T1, corresponding to the distances S0 and S1, are distributed in accordance with the Rayleigh law. Indeed, both these times cannot be zero, but cannot be very long either. In addition, in an emergency situation, short time values are more likely than long time values, and because of that, their probability density distribution functions should be heavily skewed in the direction of short times. The Rayleigh distribution possesses these physically important properties and is accepted in our analysis. The probability PS that the sum S = S0 + S1 of the random variables S0 and S1 exceeds a certain level Ŝ is expressed by the distribution (A-1) in the Appendix A, and the computed probabilities PS of collision are shown in Table 4. The calculated data indicate particularly that the probability of collision for the input data used in the above deterministic example, where the pre-deceleration distance was σ0 = S0 = 30 m, the deceleration distance was σ1 = S1 = 25 m, and the dimensionless parameters were $\eta = \dfrac{\sigma_1}{\sigma_0} = 0.8333$ and $s = \dfrac{\hat{S}}{\sqrt{2(\sigma_0^2 + \sigma_1^2)}} = 0.9959$, is as high as 0.6320.

As evident from Table 4, the probability of collision will be considerably lower for larger available distances Ŝ. The calculated data clearly indicate that the available distance plays the major role in avoiding collision, while the HF is less important. It is noteworthy in this connection that the Rayleigh distribution is an extremely conservative one. Data that are less conservative and, perhaps, more realistic could be obtained by using, say, the Weibull distribution for the random times and distances.

Note that the decrease in the probabilities of collision (which is, in our approach, the probability PS that the available distance Ŝ to the obstacle is exceeded) for high $\eta = \dfrac{\sigma_1}{\sigma_0}$ ratios (i.e., in the case of an exceptionally good human performance that is reflected by a very short most likely pre-deceleration distance σ0) should be attributed to the way the dimensionless parameters $s = \dfrac{\hat{S}}{\sqrt{2(\sigma_0^2 + \sigma_1^2)}}$ and $\eta = \dfrac{\sigma_1}{\sigma_0}$ were selected, and does not necessarily reflect the actual role of the most likely pre-deceleration distance σ0. For η ≥ 1 the probability of collision naturally decreases with an increase in the η ratio, and rapidly decreases with an increase in the s value.

The Table 4 data are based on the convolution equation

$P_s = 1 - \displaystyle\int_0^{\hat{S}}\dfrac{s_0}{\sigma_0^2}\exp\!\left(-\dfrac{s_0^2}{2\sigma_0^2}\right)\left[1 - \exp\!\left(-\dfrac{(\hat{S}-s_0)^2}{2\sigma_1^2}\right)\right]ds_0 = e^{-(1+\eta^2)s^2} + e^{-s^2}\left\{\dfrac{1}{1+1/\eta^2}\left[e^{-s^2/\eta^2} - e^{-\eta^2 s^2}\right] + \dfrac{\sqrt{\pi}\,s}{\eta+1/\eta}\left[\Phi(\eta s)+\Phi(s/\eta)\right]\right\}$                (40)

for the probability Ps of collision. The PDFs

$f(s_{0,1}) = \dfrac{s_{0,1}}{\sigma_{0,1}^2}\exp\!\left(-\dfrac{s_{0,1}^2}{2\sigma_{0,1}^2}\right)$                 (41)

are the PDFs of the random variables S0 and S1, σ0,1 are the modes (most likely values) of these variables,

$\bar{s}_{0,1} = \sqrt{\dfrac{\pi}{2}}\,\sigma_{0,1}$   and   $\sqrt{D_{0,1}} = \sqrt{\dfrac{4-\pi}{2}}\,\sigma_{0,1}$            (42)

are their means and standard deviations, respectively,

$s = \dfrac{\hat{S}}{\sqrt{2\left(\sigma_0^2 + \sigma_1^2\right)}}$     and     $\eta = \dfrac{\sigma_1}{\sigma_0}$                 (43)

are the dimensionless parameters of the convolution of the two PDFs f(s0,1), and

$\Phi(\alpha) = \dfrac{2}{\sqrt{\pi}}\displaystyle\int_0^{\alpha}e^{-t^2}\,dt$

is the Laplace function (probability integral).
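A minimal Python sketch of the convolution equation, as reconstructed above, is given below; the closed form is cross-checked against a direct numerical evaluation of the integral, and the sight-distance values are illustrative, so the printed probabilities are not claimed to reproduce the Table 4 entries exactly.

```python
import math
from scipy import integrate
from scipy.special import erf

def p_collision(S_hat, sigma0, sigma1):
    """Closed-form probability that the random stopping distance S0 + S1
    (Rayleigh modes sigma0, sigma1) exceeds the available sight distance S_hat."""
    eta = sigma1 / sigma0
    s = S_hat / math.sqrt(2.0 * (sigma0**2 + sigma1**2))
    bracket = (1.0 / (1.0 + 1.0/eta**2)) * (math.exp(-s**2/eta**2) - math.exp(-eta**2 * s**2)) \
              + math.sqrt(math.pi) * s / (eta + 1.0/eta) * (erf(eta*s) + erf(s/eta))
    return math.exp(-(1.0 + eta**2) * s**2) + math.exp(-s**2) * bracket

def p_collision_numeric(S_hat, sigma0, sigma1):
    """The same probability by direct numerical integration, as a check."""
    def integrand(s0):
        return (s0 / sigma0**2) * math.exp(-s0**2 / (2.0*sigma0**2)) \
               * (1.0 - math.exp(-(S_hat - s0)**2 / (2.0*sigma1**2)))
    val, _ = integrate.quad(integrand, 0.0, S_hat)
    return 1.0 - val

sigma0, sigma1 = 30.0, 25.0          # most likely pre-deceleration / deceleration distances, m
for S_hat in (56.0, 80.0, 120.0):    # illustrative available sight distances, m
    print(S_hat, p_collision(S_hat, sigma0, sigma1), p_collision_numeric(S_hat, sigma0, sigma1))
```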

The computed data in Table 4 indicate that the ASD and the deceleration ratio η have a significant effect on the probability PS of collision. This is particularly true for the ASD. Assuming that a level of PS on the order of PS = 10⁻⁴ might be acceptable, the ratio η of the "useful" braking distance σ1 to the "useless", but inevitable, pre-braking distance σ0 should be significant, higher than, say, 3, to assure a low enough probability PS of collision. The following conclusions could be drawn from the carried out analysis:

1) Probabilistic analytical modeling provides an effective means to support simulations, which will eventually help in the reduction of road casualties; is able to improve dramatically the state-of-the-art in understanding and accounting for the human performance in various vehicular missions and off-normal situations, and in particular in the pressing issue of analyzing human-vehicle handshake, i.e. the role of human performance when taking over vehicle control from the automated system; and enables quantifying, on the probabilistic basis, the likelihood of collision in an automatically driven vehicle for the situation when an immovable obstacle is suddenly detected in front of the moving vehicle;

2) The computed data indicate that it is the ASD that is, for the given initial speed, the major factor in keeping the probability of collision sufficiently low;

3) Future work should include implementation of the suggested methodology, considering that the likelihood of an accident, although never zero, could and should be predicted, adjusted to a particular vehicle, autopilot, driver and environment, and be made low enough; should consider, also on the probabilistic basis, the role of the variability of the available sight distance;

4) This work should include also considerable effort, both theoretical (analytical and computer-aided) and experimental/empirical, as well as statistical, in similar modeling problems associated with the upcoming and highly challenging automated driving era;

5) Future work should include training a system to convolute numerically a larger number of physically meaningful non-normal distributions. The developed formalism could be used also for the case, when an obstruction is unexpectedly determined in front of a railroad (RR) train [99-114].

Quantifying the Effect of Astronaut's/Pilot's/Driver's/Machinist's SoH on His/Her Performance


"There is nothing more practical than a good theory"

Kurt Zadek Lewin, German-American psychologist

The subject of this section can be defined as probabilistic ergonomics science, probabilistic HF engineering, or a probabilistic human-systems technology. The paper is geared to HITL-related situations, when human performance and equipment reliability contribute jointly to the outcome of a mission or an extraordinary situation. While considerable improvements in various aerospace missions and off-normal situations can be achieved through better traditional ergonomics, better health control and work environment, and other well established non-mathematical human psychology means that affect directly the individual's behavior, health and performance, there is also a significant potential for improving safety in the air and in outer space by quantifying the role of the HF and of the human-equipment interaction by using PPM and PRA methods and approaches.

While the mental workload (MWL) level is always important and should always be considered when addressing and evaluating the outcome of a mission or a situation, the human capacity factor (HCF) is usually equally important: the same MWL can result in a completely different outcome depending on the HCF level of the individual(s) involved; in other words, it is the relative levels of the MWL and HCF that have to be considered and quantified in one way or another, when assessing the likelihood of a mission or a situation success and safety. MWL and HCF can be characterized by different means and different measures, but it is clear that both these factors have to have the same units in a particular problem of interest.

It should be emphasized that one important and favorable consequence of an effort based on the consideration of the MWL and HCF roles is bridging the existing gap between what the aerospace psychologists and the system analysts do. Judging from the author's numerous interactions with aerospace system analysts and avionic human psychologists, these two categories of specialists seldom team up and actively collaborate. Application of the PPM/PRA concept provides therefore a natural and effective means for quantifying the expected HITL related outcome of a mission or a situation and for minimizing the likelihood of a mishap, casualty or failure. By employing quantifiable and measurable ways of assessing the role and significance of various uncertainties and by treating HITL related missions and situations as part, often the most crucial part, of the complex man-instrumentation-equipment-vehicle-environment system, one could improve dramatically the human performance and the state-of-the-art in assuring aerospace mission success and safety.

Various aspects of SoH and HE characteristics are intended to be addressed in the author's future work as important items of outer-space medicine. The recently suggested three-step-concept methodology is intended to be employed in such an effort. The considered PPM/PRA approach is based on the application of the DEPDF. It is assumed that the mean time to failure (MTTF) of a human performing his/her duties is an adequate criterion of his/her failure/error-free performance: In the case of an error-free performance this time is infinitely long, and it is very short in the opposite case. The suggested expression for the DEPDF considers that both a high MTTF and a high HCF result in a higher probability of non-failure, but it enables one to separate the MTTF, as the direct HE characteristic, from other HCF features, such as, e.g., level of training, ability to operate under time pressure, mature thinking, etc.

It should be emphasized that the DEPDFs considered in this and in the author's previous publications are different from the classical (Laplace, Gumbel) double-exponential distributions and are not the same for different HITL-related problems of interest. The DEPDF could be introduced, as has been shown in the author's previous publications, in many different ways, depending on the particular risk-analysis field, mission or situation, as well as on the sought information. The DEPDF suggested in this analysis considers the following major factors: Flight duration, the acceptable level of the continuously monitored (measured) human state-of-health (SoH) characteristic (symptom), the MTTF as an appropriate HE characteristic, the level of the mental workload (MWL) and the human capacity factor (HCF). It is noteworthy that while the notion of the MWL is well established in aerospace and other areas of human psychology and is reasonably well understood and investigated, the notion of the HCF was introduced by the author of this analysis only several years ago. The rationale behind that notion is that it is not the absolute MWL level, but the relative levels of the MWL and HCF that determine, in addition to other critical factors, the probability of the human failure and the likelihood of a mishap.

It has been shown that the DEPDF has its physical roots in the entropy of this function. It has been shown also how the DEPDF could be established from the highly focused and highly cost effective FOAT data. FOAT is a must, if understanding the physics of failure of instrumentation and/or of human performance is imperative to assure high likelihood of a failure-free aerospace operation. The FOAT data could be obtained by testing on a flight simulator, by analyzing the responses to post-flight questionnaires or by using Delphi technique. FOAT could not be conducted, of course, in application to humans and their health, but testing and state-of-health monitoring could be run until a certain level (threshold) of the human SH characteristic (symptom), still harmless to his/her health, is reached.

The general concepts addressed in our analysis are illustrated by practical numerical examples. It is demonstrated how the probability of a successful outcome of the anticipated aerospace mission can be assessed in advance, prior to the fulfillment of the actual operation. Although the input data in these examples are more or less hypothetical, they are nonetheless realistic. These examples should be viewed therefore as useful illustrations of how the suggested DEPDF model can be implemented. It is the author's belief that the developed methodologies, with appropriate modifications and extensions, when necessary, can be effectively used to quantify, on the probabilistic basis, the roles of various critical uncertainties affecting success and safety of an aerospace mission or a situation of importance. The author believes also that these methodologies and formalisms can be used in many other cases, well beyond the aerospace domain, when a human encounters an uncertain environment or a hazardous off-normal situation, and when there is an incentive/need to quantify his/her qualifications and performance, and/or when there is a need to assess and possibly improve the human role in a particular HITL mission or a situation, and/or when there is an intent to include this role into an analysis of interest, with consideration of the navigator's SoH. Such an incentive always exists for astronauts in their long outer space journeys, or for long maritime travels, but could be also of importance for long enough aircraft flights, when, e.g., one of the two pilots gets incapacitated during the flight.

The analysis carried out here is, in effect, an extension of the above effort and is focused on the application of the DEPDF in those HITL related problems in aerospace engineering that are aimed at the quantification, on the probabilistic basis, of the role of the HF, when both the human performance and, particularly, his/her SoH affect the outcome of an aerospace mission or a situation. While the PPM of the reliability of the navigation instrumentation (equipment), both hard- and software, could be carried out using the well-known Weibull distribution, or on the basis of the BAZ equation, or other suitable and more or less well established means, the role of the human factor, when quantification of the human role is critical, could be considered by using the suggested DEPDF. There might be other ways to go, but this is, in the author's view and experience, a quite natural and rather effective way.

The DEPDF is of the extreme-value-distribution type, i.e., it places an emphasis on the inputs of extreme loading conditions that occur in extraordinary (off-normal) situations, and disregards the contribution of low level loadings (stressors). Our DEPDF is of a probabilistic a-priori type, rather than of a statistical a-posteriori type, and could be introduced in many ways depending on the particular mission or situation, as well as on the sought information. It is noteworthy that our DEPDF is not a special case, nor a generalization, of Gumbel, or any other well-known statistical EVD used for many decades in various applications of the statistics of extremes, such as, e.g., prediction of the likelihood of extreme earthquakes or floods. Our DEPDF should rather be viewed as a practically useful engineering or HF related relationship that makes physical and logical sense in many practical problems and situations, and could and should be employed when there is a need to quantify the probability of the outcome of a HITL-related aerospace mission. The DEPDF suggested in this analysis considers the following major factors: Flight/operation duration; the acceptable level of the continuously monitored (measured) meaningful human SoH characteristic (the FOAT approach is not acceptable in this case); the MWL level; the MTTF as an appropriate HE characteristic; and the HCF.

The DEPDF could be introduced, as has been indicated, in many ways, and its particular formulation depends on the problem addressed. In this analysis we suggest a DEPDF that enables one to evaluate the impact of three major factors, the MWL G, the HCF F, and the time t (possibly affecting the navigator's performance and sometimes even his/her health), on the probability Ph(F,G,t) of his/her non-failure. With an objective to quantify the likelihood of the human non-failure, the corresponding probability could be sought in the form of the following DEPDF:

$P_h(F, G, S^{*}) = P_0\exp\!\left[-\left(1 + \gamma_S S^{*} t\right)\dfrac{G^2}{G_0^2}\exp\!\left(-\left(1 + \gamma_T T^{*}\right)\dfrac{F^2}{F_0^2}\right)\right]$               (44)

Here P0 is the probability of the human non-failure at the initial moment of time (t = 0) and at a normal (low) level of the MWL (G = G0); S* is the threshold (acceptable level) of the continuously monitored/measured (and possibly cumulative, effective, indicative, even multi-parametric) human health characteristic (symptom), such as, e.g., body temperature, arterial blood pressure, oxyhemometric determination of the level of saturation of blood hemoglobin with oxygen, electrocardiogram measurements, pulse frequency and fullness, frequency of respiration, or measurement of skin resistance that reflects skin covering with sweat (since the time t and the threshold S* enter the expression (44) as the product S*t, each of these parameters has a similar effect on the sought probability); γS is the sensitivity factor for the symptom S*; G ≥ G0 is the actual (elevated, off-normal, extraordinary) MWL, which could be time dependent; G0 is the MWL in ordinary (normal) operation conditions; T* is the mean time to error/failure (MTTF); γT is the sensitivity factor for the MTTF T*; F ≥ F0 is the actual (off-normal) HCF exhibited or required in an extraordinary condition of importance; F0 is the most likely (normal, specified, ordinary) HCF. It is clear that there is a certain overlap between the levels of the HCF F and the T* value, which also has to do with the human quality. The difference is that the T* value is a short-term characteristic of the human performance that might be affected, first of all, by his/her personality, while the HCF is a long-term characteristic of the human, such as his/her education, age, experience, ability to think and act independently, etc. The author believes that the MTTF T* might be determined for the given individual during testing on a flight simulator, while the factor F, although it should also be quantified, cannot typically be evaluated experimentally using accelerated testing on a flight simulator. While the P0 value is defined as the probability of non-failure at a very low level of the MWL G, it could be determined and evaluated also as the probability-of-non-failure for a hypothetical situation when the HCF F is extraordinarily high, i.e., for an individual/pilot/navigator who is exceptionally highly qualified, while the MWL G is still finite, and so is the operational time t. Note that the above function Ph(F, G, S*) has a nice symmetric-and-consistent form. It reflects, in effect, the roles of the MWL + SoH "objective", "external", impact $E = \left(1 + \gamma_S S^{*} t\right)\dfrac{G^2}{G_0^2}$, and of the HCF + HE "subjective", "internal", impact $I = \left(1 + \gamma_T T^{*}\right)\dfrac{F^2}{F_0^2}$. The rationale behind the structures of these expressions is that the level of the MWL could be affected by the human's SoH (the same person might experience a higher MWL, which is not only different for different humans, but might be quite different depending on the navigator's SoH), while the HCF, although it could also be affected by the state of his/her health (SoH), has its direct measure in the likelihood that he/she makes an error. In our approach this circumstance is considered by the T* value, the mean time to error (MTTF), since an error is, in effect, a failure of the error-free performance. When the human's qualification is high, the likelihood of an error is lower. The "external" factor E = MWL + SoH is more or less a short term characteristic of the human performance, while the factor I = HCF + HE is a more permanent, longer term characteristic of the HCF and its role.
It is noteworthy that the human's mind (MWL) and his/her body (SoH) are closely linked, and that such links are far from being well defined and straightforward. The suggested formalism to consider this circumstance is just a possible way to account for such a link. Difficulties may arise in some particular occasions when the MWL and the SoH factors overlap. It is anticipated therefore that the MWL impact in the suggested formalism considers, to the extent possible, various more or less important impacts other than the SoH related one.
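A small Python sketch of the DEPDF, assuming Eq. (44) as reconstructed above, is given below; all parameter values (P0, the sensitivity factors, the normal levels G0 and F0, and the call arguments) are hypothetical and serve only to illustrate the limiting behaviors discussed in the text.

```python
import math

def p_nonfailure(t, G, F, S_star, T_star, P0=0.99,
                 G0=1.0, F0=1.0, gamma_S=1.0e-3, gamma_T=1.0e-2):
    """Probability of human non-failure per the DEPDF of Eq. (44) as
    reconstructed above; all parameter values here are hypothetical."""
    external = (1.0 + gamma_S * S_star * t) * (G / G0)**2    # MWL + SoH impact E
    internal = (1.0 + gamma_T * T_star) * (F / F0)**2        # HCF + HE impact I
    return P0 * math.exp(-external * math.exp(-internal))

# Sanity checks of the limiting behaviors discussed in the text
print(p_nonfailure(t=5.0, G=2.0, F=8.0, S_star=100.0, T_star=20.0))   # very high HCF: close to P0
print(p_nonfailure(t=50.0, G=3.0, F=1.0, S_star=100.0, T_star=20.0))  # ordinary HCF, high MWL, long time: low
```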

Measuring the MWL has become a key method of improving aviation safety, and there is an extensive published work devoted to the measurement of the MWL in aviation, both military and commercial. Pilot's MWL can be measured using subjective ratings and/or objective measures. The subjective ratings during FOAT (simulation tests) can be, e.g., after the expected failure is defined, in the form of periodic inputs to some kind of data collection device that prompts the pilot to enter, e.g., a number between 1 and 10 to estimate the MWL every few minutes. There are also some objective MWL measures, such as, e.g., heart rate variability. Another possible approach uses post-flight questionnaire data: it is usually easier to measure the MWL on a flight simulator than in actual flight conditions. In a real aircraft, one would probably be restricted to using post-flight subjective (questionnaire) measurements, since a human psychologist would not want to interfere with the pilot's work. Given the multidimensional nature of MWL, no single measurement technique can be expected to account for all the important aspects of it. In modern military aircraft, complexity of information, combined with time stress, creates significant difficulties for the pilot under combat conditions, and the first step to mitigate this problem is to measure and manage the MWL. Current research efforts in measuring MWL use psycho-physiological techniques, such as electroencephalographic, cardiac, ocular, and respiration measures in an attempt to identify and predict MWL levels. Measurement of cardiac activity has been also a useful physiological technique employed in the assessment of MWL, both from tonic variations in heart rate and after treatment of the cardiac signal. Such an effort belongs to the fields of astronautic medicine and aerospace human psychology. Various aspects of the MWL, including modeling, and situation awareness analysis and measurements, were addressed by numerous investigators.

The HCF, unlike the MWL, is a new notion. The HCF plays, with respect to the MWL, approximately the same role as strength/capacity plays with respect to stress/demand in structural analysis and in some economics problems. The HCF includes, but might not be limited to, the following major qualities that would enable a professional human to successfully cope with an elevated, off-normal MWL: age; fitness; health; personality type; psychological suitability for a particular task; professional experience and qualifications; education, both special and general; relevant capabilities and skills; level, quality and timeliness of training; performance sustainability (consistency, predictability); independent thinking and independent acting, when necessary; ability to concentrate; awareness and ability to anticipate; ability to withstand fatigue; self-control and ability to act in cold blood in hazardous and even life-threatening situations; mature (realistic) thinking; ability to operate effectively under pressure, and particularly under time pressure; leadership ability; ability to operate effectively, when necessary, in a tireless fashion, for a long period of time (tolerance to stress); ability to act effectively under time pressure and make well-substantiated decisions in a short period of time and in uncertain environmental conditions; team-player attitude, when necessary; swiftness in reaction, when necessary; adequate trust (in humans, technologies, equipment); and the ability to maintain the optimal level of physiological arousal. These and other qualities are certainly of different importance in different HITL situations. It is also clear that different individuals possess these qualities to different degrees. The long-term HCF could be time-dependent.

To come up with suitable figures-of-merit (FoM) for the HCF, one could rank, similarly to the MWL estimates, the above and perhaps other qualities on a scale from, say, one to ten, and calculate the average FoM for each individual and particular task. Clearly, the MWL and the HCF should use the same measurement units, which could, in particular, be dimensionless. Special psychological tests might have to be developed and conducted to establish the level of these qualities for the individuals of significance. The importance of considering the relative levels of the MWL and the HCF in human-in-the-loop problems has been addressed and discussed in several earlier publications of the author and is beyond the scope of this analysis.
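As an illustration only (not part of the reviewed publications), a minimal sketch of such an averaging is shown below; the quality names and scores are hypothetical placeholders.

```python
# Minimal sketch (illustration only): averaging 1-to-10 ratings of HCF qualities
# into a single figure-of-merit (FoM). The quality names and scores below are
# hypothetical placeholders, not data from the reviewed publications.

def hcf_figure_of_merit(ratings):
    """Plain average of the quality ratings on a 1-to-10 scale."""
    return sum(ratings.values()) / len(ratings)

ratings = {  # hypothetical scores for one individual and one particular task
    "professional experience and qualifications": 8,
    "ability to operate under time pressure": 7,
    "adequate trust in technologies and equipment": 9,
    "swiftness in reaction": 6,
}
print(hcf_figure_of_merit(ratings))  # 7.5, on the same 1-to-10 scale as the MWL estimates
```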

The employed DEPDF makes physical sense. Indeed, 1) When the time t, and/or the level S* of the governing SoH symptom, and/or the level of the MWL G are significant, the probability of non-failure is always low, no matter how high the level of the HCF F might be; 2) When the level of the HCF F and/or the MTTF T* are significant, and the time t, and/or the level S* of the governing SoH symptom, and/or the level of the MWL G are finite, the probability Ph(F, G, S*) of the human non-failure becomes close to the probability P0 of the human non-failure at the initial moment of time (t = 0) and at a normal (low) level of the MWL (G = G0); 3) When the HCF F is at the ordinary level F0, then

Ph(F, G, S*) = Ph(G, S*) = P0 exp[-(γS S*t + G²/G0²) exp(-γT T*)]              (45)

For a long time in operation (t → ∞), and/or when the level S* of the governing SoH symptom is significant (S* → ∞), and/or when the level G of the MWL is high, the probability of non-failure will always be low, provided that the MTTF T* is finite; 4) At the initial moment of time (t = 0) and/or for a very low level of the SoH symptom S* (S* = 0) the formula yields:

Ph(F, G, S*) = Ph(F, G) = P0 exp[-(G²/G0²) exp(-γT T* F²/F0²)]                (46)

When the MWL G is high, the probability of non-failure is low, provided that the MTTF T* and the HCF F are finite. However, when the HCF is extraordinarily high and/or the MTTF T* is significant (low likelihood that a HE will take place), the above probability of non-failure will be close to one. In connection with the taken approach it is noteworthy also that not every model needs prior experimental validation, and, in the author's view, the structure of the suggested model does not. Just the opposite seems to be true: this model should be used as the basis of FOAT-oriented accelerated experiments to establish the MWL, the HCF, the level of HE (through the corresponding MTTF) and the navigator's SoH at normal operation conditions and for a navigator with regular skills and of ordinary capacity. These experiments could be run, e.g., on different flight simulators and on the basis of specially developed testing methodologies. Being a probabilistic, not a statistical, model, the equation (1) should be used to obtain, interpret and accumulate the relevant statistical information. Starting with collecting statistics first seems to be a time-consuming and highly expensive path to nowhere.
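A minimal sketch of how the normalized DEPDF (47) below could be evaluated numerically is given here; the parameter values are hypothetical and serve only to illustrate the structure of the model, not any particular FOAT data.

```python
import math

def p_bar(t_hours, g_ratio, f_ratio, gamma_s_sstar, gamma_t_tstar):
    """Normalized probability of human non-failure P_bar = Ph/P0, in the
    double-exponential form reconstructed in this section.
    g_ratio = G/G0 (MWL ratio), f_ratio = F/F0 (HCF ratio)."""
    external = gamma_s_sstar * t_hours + g_ratio ** 2    # MWL + SoH ("external") impact
    internal = math.exp(-gamma_t_tstar * f_ratio ** 2)   # HCF + HE ("internal") impact
    return math.exp(-external * internal)

# Hypothetical values: gamma_S*S* = 16.33 1/h, gamma_T*T* = 3.4, two hours into the mission
print(round(p_bar(2.0, g_ratio=1.5, f_ratio=1.0,
                  gamma_s_sstar=16.33, gamma_t_tstar=3.4), 3))
```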

Assuming, for the sake of simplicity, that the probability P0 is established and differentiating the expression

P̄ = Ph(F, G, S*)/P0 = exp[-(γS S*t + G²/G0²) exp(-γT T* F²/F0²)]              (47)

with respect to the time t, the following formula can be obtained:

dP̄/dt = -H(P̄) γS S*/(γS S*t + G²/G0²)                    (48)

where H(P̄) = -P̄ ln P̄ is the entropy of the distribution P̄ = Ph(F, G, S*)/P0. When the MWL G is at its normal level G0 and/or when the still-acceptable SoH level S* is extraordinarily high, the above formula yields dP̄/dt = -H(P̄)/t. Hence, the basic distribution for the probability of non-failure is a generalization of the situation when the decrease in the probability of human performance non-failure with time can be evaluated as the ratio of the entropy H(P̄) of the above distribution to the elapsed time t, provided that the MWL is at its normal level and/or the HCF of the navigator is exceptionally high. At the initial moment of time (t = 0) and/or when the governing symptom has not yet manifested itself (S* = 0) we obtain:

P̄ = exp[-(G²/G0²) exp(-γT T* F²/F0²)]                (49)

Then we find,

dP̄/dG = -2H(P̄) (G/G0²)/(G²/G0²) = -(2/G) H(P̄)                   (50)

For significant MWL levels this formula yields |dP̄/dG| = 2H(P̄)/G. Thus, another way to interpret the underlying physics of the accepted distribution is to view it as a distribution for which the change in the probability of non-failure at the initial moment of time with the change in the level of the MWL is, when this level is significant, proportional to twice the entropy of the distribution (4). The entropy H(P̄) is zero for the probabilities P̄ = 0 and P̄ = 1, and reaches its maximum value Hmax = 1/e = 0.3679 for P̄ = 1/e = 0.3679. Hence, the derivative dP̄/dG is zero for the probabilities P̄ = 0 and P̄ = 1, and its maximum magnitude |dP̄/dG|max = 2/(eG) takes place for P̄ = 1/e = 0.3679.

The P̄ values calculated for the case T* = 0 (a human error is likely, but could be rapidly corrected because of the high HCF of the performer) indicate that: 1) At a normal MWL level and/or at an extraordinarily (exceptionally) high HCF level the probability of human non-failure is close to 100%; 2) If the MWL is exceptionally high, the human will definitely fail, no matter how high his/her HCF is; 3) If the HCF is high, even a significant MWL has a small effect on the probability of non-failure, unless this MWL is exceptionally large (indeed, highly qualified individuals are able to cope better with various off-normal situations and get tired less as time progresses than individuals of ordinary capacity); 4) The probability of non-failure decreases with an increase in the MWL (especially for relatively low MWL levels) and increases with an increase in the HCF (especially for relatively low HCF levels); 5) For high HCFs the increase in the MWL level has a much smaller effect on the probabilities of non-failure than for low HCFs; it is noteworthy that the above intuitively more or less obvious judgments can be effectively quantified by using analyses based on Eqs. (1) and (4); 6) Increases in the HCF (F/F0 ratio) and in the MWL (G/G0 ratio) above 3.0 have a minor effect on the probability of non-failure; this means, particularly, that the navigator does not have to be trained for an extraordinarily high MWL and/or possess an exceptionally high HCF (an F/F0 ratio higher than 3.0 compared to a navigator of ordinary capacity/qualification); in other words, a navigator does not have to be a superman or a superwoman to successfully cope with a high-level MWL, but still has to be trained to be able to cope with a MWL a factor of three higher than the normal level. If the requirements for a particular level of safety are above the HCF of a well-educated and well-trained human, then the development and employment of advanced equipment and instrumentation should be considered for the particular task, and the decision about the right way to go should be based on the evaluation, preferably also on the probabilistic basis, of both the human and the equipment performance, costs, time-to-completion ("time-to-market") and the possible consequences of failure.
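The following short sketch illustrates some of these observations numerically. It takes the DEPDF (47) at the initial moment of time (t = 0) and, purely for illustration, sets γT T* = 1, so that P̄ = exp[-(G/G0)² exp(-(F/F0)²)]; it then tabulates P̄ for several MWL and HCF ratios and evaluates the entropy H(P̄) = -P̄ ln P̄.

```python
import math

def p_bar_t0(g_ratio, f_ratio):
    """P_bar at t = 0 with gamma_T*T* set to 1 for illustration:
    exp[-(G/G0)^2 * exp(-(F/F0)^2)]."""
    return math.exp(-(g_ratio ** 2) * math.exp(-(f_ratio ** 2)))

def entropy(p):
    """Entropy H(P) = -P ln P; zero at P = 0 and P = 1, maximal (1/e) at P = 1/e."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p)

for g in (1.0, 2.0, 3.0, 4.0):                                    # MWL ratios G/G0
    row = [f"{p_bar_t0(g, f):.4f}" for f in (1.0, 2.0, 3.0, 4.0)]  # HCF ratios F/F0
    print(f"G/G0 = {g:.0f}: P_bar =", ", ".join(row))

print(f"H_max = {entropy(1.0 / math.e):.4f}")  # 0.3679 = 1/e
```

For HCF ratios of about 3 and higher, the tabulated P̄ values become nearly insensitive to further increases in either ratio, in line with item 6) above.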

In the basic DEPDF (1) there are three unknowns: the probability P0 and the two sensitivity factors γS and γT. As has been mentioned above, the probability P0 could be determined by testing the responses of a group of exceptionally highly qualified individuals, such as, e.g., Captain Sullenberger in the famous Miracle-on-the-Hudson event. Let us show how the sensitivity factors γS and γT can be determined. Eq. (4) can be written as

-ln P̄/(γS S*t + G²/G0²) = exp(-γT T* F²/F0²)               (51)

Let FOAT be conducted on a flight simulator for the same group of individuals, characterized by more or less the same high MTTF T* values and high HCF F/F0 ratios, at two different elevated (off-normal) MWL conditions, G1 and G2. Let the governing symptom, whatever it is, have reached its critical pre-established level S* at the times t1 and t2, respectively, from the beginning of testing, and let the corresponding percentages of the individuals that failed the tests be Q1 and Q2, so that the corresponding probabilities of non-failure were P̄1 and P̄2, respectively. Since the same group of individuals was tested, the right-hand part of the above equation, which reflects the levels of the HCF and HE, remains more or less unchanged, and therefore the requirement

-ln P̄1/(γS S*t1 + G1²/G0²) = -ln P̄2/(γS S*t2 + G2²/G0²)               (52)

should be fulfilled. This equation yields:

γS = (1/S*) [(G1²/G0²) ln P̄2 - (G2²/G0²) ln P̄1]/(t2 ln P̄1 - t1 ln P̄2)              (53)

After the sensitivity factor γS for the assumed symptom level S* is determined, the dimensionless variable γT T*, associated with the human error sensitivity factor γT, could be evaluated. Equation (51) can be written in this case as follows:

γT T* = -(F0²/F²) ln[-ln P̄/(γS S*t + G²/G0²)]               (54)

For normal values of the HCF (F²/F0² = 1) and high values of the MWL (G²/G0² ≫ 1) this equation yields:

γT T* ≈ -ln[-ln P̄/(γS S*t + G²/G0²)]                    (55)

The product γT T* should always be positive, and therefore the condition γS S*t + G²/G0² ≥ -ln P̄ should always be fulfilled. This means that the testing time of a meaningful FOAT on a flight simulator should exceed, for the chosen G²/G0² level, the threshold

t* = (ln P̄ + G²/G0²)/(γS S*)                   (56)

When the probability P̄ changes from P̄ = 1 to P̄ = 0, the t* value changes from t* = (G²/G0²)/(γS S*) to infinity.

Let FOAT be conducted on a flight simulator, or by using other suitable testing equipment, for a group of individuals characterized by a high HCF F/F0 level at two loading conditions, G1/G0 = 1.5 and G2/G0 = 2.5. Assume that the tests have indicated that the governing symptom (such as, e.g., body temperature, arterial blood pressure, oxyhemometric determination of the level of saturation of blood hemoglobin with oxygen, etc.) of the critical magnitude of, say, S* = 180 has been detected during the first set of testing (under the loading condition G1/G0 = 1.5) after t1 = 2.0 h of testing in 70% of the individuals (so that P̄1 = 0.3), and during the second set of testing (under the loading condition G2/G0 = 2.5) after t2 = 4.0 h of testing in 90% of the individuals (so that P̄2 = 0.1). With these input data the above formula for the sensitivity factor γS yields:

γS ≈ 0.09073 h⁻¹ for the assumed symptom level S* = 180.

Then the probability P ¯ is

P̄ = Ph(F, G, S*)/P0 = exp[-(γS S*t + G²/G0²) exp(-F²/F0²)], with γS S* as determined above           (57)

These results indicate particularly the importance of the HCF and that even a relatively insignificant increase in the HCF above the ordinary level can lead to an appreciable increase in the probability of human non-failure. Clearly, training and individual qualities are always important.

Let us assess now the sensitivity factor γT of the human error, measured through the time to failure (time to make an error). Let us check first whether the condition for the testing time is fulfilled, i.e., whether the testing time is long enough to exceed the required threshold. With G1/G0 = 1.5 and P̄1 = 0.3, and with γS S* = 0.09073 × 180 = 16.3314 h⁻¹, the time threshold is

t* = (ln P̄1 + G1²/G0²)/(γS S*) = (ln 0.3 + 2.25)/16.3314 = (-1.2040 + 2.25)/16.3314 = 0.06405 h

The actual testing time was 2.0 h, i.e., much longer. With G2/G0 = 2.5 and P̄2 = 0.1, and with γS S* = 16.3314 h⁻¹, we obtain the following value for the time threshold:

t* = (ln P̄2 + G2²/G0²)/(γS S*) = (ln 0.1 + 6.25)/16.3314 = (-2.3026 + 6.25)/16.3314 = 0.24171 h

The actual testing time was 4.0 hours, i.e., much longer. Thus, the threshold requirement is met in both sets of tests. Then we obtain:

γT T* ≈ -ln[-ln P̄1/(γS S*t1 + G1²/G0²)] = -ln[-ln 0.3/(16.3314 × 2.0 + 2.25)] = 3.3672

for the first set of testing. For the second one we obtain:

γT T* ≈ -ln[-ln P̄2/(γS S*t2 + G2²/G0²)] = -ln[-ln 0.1/(16.3314 × 4.0 + 6.25)] = 3.4367

The results are rather close, so that in an approximate analysis one could accept γT T* ≈ 3.4. After the sensitivity factors for the HE and SoH aspects of the human factor are determined, the probabilities of non-failure can be computed for any levels of the MWL and the HCF.
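A short sketch reproducing these figures is given below; it takes the γS S* product quoted above (16.3314 h⁻¹) as an input and recomputes the time thresholds t* and the γT T* estimates for the two simulated-FOAT data sets.

```python
import math

gamma_s_sstar = 16.3314          # gamma_S * S*, 1/h, as quoted in the text
data = [                         # (G/G0, testing time t in hours, P_bar)
    (1.5, 2.0, 0.3),
    (2.5, 4.0, 0.1),
]

for g_ratio, t, p_bar in data:
    g2 = g_ratio ** 2
    t_star = (math.log(p_bar) + g2) / gamma_s_sstar                         # threshold, Eq. (56)
    gamma_t_tstar = -math.log(-math.log(p_bar) / (gamma_s_sstar * t + g2))  # Eq. (55), F = F0
    print(f"G/G0 = {g_ratio}: t* = {t_star:.5f} h, gamma_T*T* = {gamma_t_tstar:.4f}")

# Prints thresholds of about 0.064 h and 0.242 h and gamma_T*T* values of about
# 3.367 and 3.437, in agreement with the figures obtained above.
```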

The following conclusions can be drawn from the carried-out analyses. The suggested DEPDF for the human non-failure can be applied in various HITL-related aerospace problems, when human qualification and performance, as well as his/her state of health, are crucial, and therefore the ability to quantify them is imperative; and since nothing and nobody is perfect, these evaluations could and should be done on the probabilistic basis. The MTTF is suggested as a suitable characteristic of the likelihood of a human error: if no error occurs for a long time, this time is significant; in the opposite situation it is very short. The MWL, the HCF, time and the acceptable levels of the human health characteristic and of his/her propensity to make an error are important parameters that determine the level of the probability of non-failure of a human when conducting a flight mission or in an extraordinary situation, and it is these parameters that are considered in the suggested DEPDF. The MWL and HCF levels, the acceptable cumulative human health characteristic and the characteristic of his/her propensity to make an error should be established depending on the particular mission or situation, and the acceptable/adequate safety level should be established on the basis of FOAT data obtained using flight simulation equipment and instrumentation, as well as other suitable and trustworthy sources of information, including, perhaps, the well-known and widely used Delphi technique (method). The suggested DEPDF-based model can be used in many other fields of engineering and applied science as well, including various fields of human psychology, whenever there is a need to quantify the role of the human factor in a HITL situation. The author does not claim, of course, that all the i's are dotted and all the t's are crossed by the suggested approach. Plenty of additional work should be done to "reduce to practice" the findings of this paper, as well as those suggested in the author's previous HITL-related publications.

Survivability of Species in Different Habitats


"There were sharks before there were dinosaurs, and the reason sharks are still in the ocean is that nothing is better at being a shark than a shark."

Eliot Peper, American writer

Survivability of species in different habitats is important, particularly in connection with travel to, and exploration of habitat conditions in, outer space. The BAZ equation enables one to consider the effects of as many stressors as necessary, such as, say, radiation, hygrometry, oxygen rate, pressure, etc. It should be emphasized that all the stressors of interest are applied simultaneously/concurrently, and this takes care of their possible coupling, nonlinear effects, etc. The physically meaningful and highly flexible kinetic BAZ approach simply helps to bridge the gap between what one "sees" as a result of the appropriate FOAT and what he/she will supposedly "get" in the actual "field"/"habitat" conditions. Let, e.g., the challenge of adaptation to a space flight and to new planetary environments be addressed, let a particular species be tested until "death" (whatever the indication of it might be), and let the roles of the temperature T and the gravity G be considered in an astro-biological experiment of importance. This experiment corresponds to FOAT (testing to failure) in electronic and photonic reliability engineering. Then the double-exponential BAZ equation can be written for the application in question as follows:

P = exp[-γC C* t exp(-(U0 - γG G)/(kT))]                (58)

Here C* is an objective quantitative evidence/indication that the particular organism or group of organisms has died, U0 is the stress-free activation energy that characterizes the health or the typical longevity of the given species, and the "gammas" are sensitivity factors. The above equation contains three unknowns: the stress-free activation energy U0 and the two sensitivity factors. These unknowns could be determined from the available observed data or from specially designed, carefully conducted and thoroughly analyzed FOAT. At the first step, testing at two constant temperature levels, T1 and T2, is conducted, while the gravity stressor G, and hence the effective energy level U0 - γG G = kT ln(γC/n), remains the same in both sets of tests. The notation n = -ln P/(C* t) is introduced here. Since the left-hand part of the above relationship is the same in both sets of tests, this equation results in the following formula for the γC factor:

γC = exp[(1/(θ - 1)) ln(n2^θ/n1)]                (59)

where n1,2 = -ln P1,2/(C* t1,2) are the probability parameters and θ = T2/T1 is the temperature ratio. The second step of testing should be conducted at two different G levels. Since the stress-free activation energy should be the same in both sets of tests, the factor γG could be found as γG = [kT/(G1 - G2)] ln(n1/n2), where the n1 and n2 values should be determined using the above formula, but are, of course, different from those obtained as a result of the first step of testing. Note that the γG factor is independent of the factor γC. The stress-free activation energy can be found as

U0 = γG G1,2 - kT1,2 ln(n1,2/γC)                  (60)

The result should, of course, be the same whether the index "1" or "2" is used. It is noteworthy that the suggested approach is expected to be more accurate for low-temperature conditions, below the melting temperature of ice, which is 0 ℃ = 273 K. It has been established, at least for microbes, that the survival probabilities below and above this temperature are completely different. It is possible that the absolute temperature in the denominator of the original BAZ equation and of the multi-parametric equations should be replaced, in order to account for the non-linear effect of the absolute temperature, by, say, a T^m value, where the exponent m is different from one. Let, e.g., the criterion of the death of the tested species be, say, C* = 100 (whatever the units), and let testing be conducted until half of the population dies, so that P1,2 = 0.5. Assume that this happened after the first step of testing was conducted for t1 = 50 h and that, after the temperature level was increased by a factor of 4, so that θ = T2/T1 = 4.0, half of the tested population failed after t2 = 20 h of testing. Then the computed n1,2 values are

n1 = -ln P1/(C* t1) = -ln 0.5/(100 × 50) = 1.3863 × 10⁻⁴ h⁻¹,  n2 = -ln P2/(C* t2) = -ln 0.5/(100 × 20) = 3.4657 × 10⁻⁴ h⁻¹

and the sensitivity factor γC is

γC = exp[(1/(θ - 1)) ln(n2^θ/n1)] = exp[(1/3) ln((3.4657 × 10⁻⁴)⁴/(1.3863 × 10⁻⁴))] = 4.7037 × 10⁻⁴ h⁻¹
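This first calibration step can be reproduced with a few lines of code; the sketch below uses the hypothetical FOAT data quoted above and the equivalent logarithmic form of Eq. (59).

```python
import math

c_star = 100.0                   # assumed "death" criterion (whatever the units)
p_half = 0.5                     # testing until half of the population dies
t1, t2 = 50.0, 20.0              # hours to 50% mortality at T1 and T2
theta = 4.0                      # temperature ratio T2/T1

n1 = -math.log(p_half) / (c_star * t1)   # probability parameters n = -ln(P)/(C* t)
n2 = -math.log(p_half) / (c_star * t2)

# Eq. (59), gamma_C = exp[(1/(theta-1)) ln(n2^theta / n1)], evaluated in log form
gamma_c = math.exp((theta * math.log(n2) - math.log(n1)) / (theta - 1.0))

print(f"n1 = {n1:.4e} 1/h, n2 = {n2:.4e} 1/h, gamma_C = {gamma_c:.4e} 1/h")
# n1 ≈ 1.3863e-4, n2 ≈ 3.4657e-4, gamma_C ≈ 4.7e-4, matching the values above to within rounding
```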

The second step of testing was conducted at the temperature level of T = 20 K, and half of the tested population failed after t1 = 100 h, when testing was conducted at the gravity level of G1 = 10 m/s², and after t2 = 30 h, when the gravity level was twice as high (G2 = 20 m/s²). The thermal energy kT = 8.6173303 × 10⁻⁵ eV/K × 20 K = 17.2347 × 10⁻⁴ eV was the same in both cases. Then the n1,2 values are

n1 = -ln P1/(C* t1) = -ln 0.5/(100 × 100) = 0.6931 × 10⁻⁴ h⁻¹,  n2 = -ln P2/(C* t2) = -ln 0.5/(100 × 30) = 2.3105 × 10⁻⁴ h⁻¹

and the factor γG is

γG = [kT/(G1 - G2)] ln(n1/n2) = [17.2347 × 10⁻⁴/(10 - 20)] ln(0.6931 × 10⁻⁴/(2.3105 × 10⁻⁴)) = 2.0751 × 10⁻⁴ eV·s²/m

The stress-free activation energy (this energy characterizes the biology of a particular species) is as follows:

U0 = γG G1 - kT ln(n1/γC) = 2.0751 × 10⁻⁴ × 10 - 17.2347 × 10⁻⁴ ln(0.6931 × 10⁻⁴/(4.7037 × 10⁻⁴)) = 2.0751 × 10⁻³ + 3.3003 × 10⁻³ ≈ 5.3272 × 10⁻³ eV

or, to make sure that there was no numerical error, could be evaluated also as

U0 = γG G2 - kT ln(n2/γC) = 2.051 × 10⁻⁴ × 20 - 17.2347 × 10⁻⁴ ln(2.3105 × 10⁻⁴/(4.7037 × 10⁻⁴)) = 4.1020 × 10⁻³ + 1.2252 × 10⁻³ = 5.3272 × 10⁻³ eV

This energy characterizes the nature of a particular species from the viewpoint of its survivability in outer space under the given temperature and gravity conditions and for the given duration of time. Clearly, in a more detailed analysis the role of other environmental factors, such as, say, vacuum, temperature variations and extremes, radiation, etc., can also be considered. From the formula (58) we obtain the following expression for the lifetime (time to failure/death) for G = 10 m/s²:

t = -[ln P/(γC C*)] exp[(U0 - γG G)/(kT)] = -[ln P/(4.7037 × 10⁻⁴ × 100)] exp[(5.3272 × 10⁻³ - 2.051 × 10⁻³)/(17.2347 × 10⁻⁴)] = -142.2738 ln P

This relationship is tabulated in the following Table 5.

In the case of G = 0, we have: t = -467.6846 ln P. This relationship is tabulated in Table 6.
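The second calibration step and the lifetime estimates can be sketched in the same way; the code below recomputes γG, U0 and the lifetime coefficients for G = 10 m/s² and G = 0 directly from the raw inputs. The results come out close to, though not exactly equal to, the figures quoted above, the small differences stemming from the rounding of the intermediate values carried in the text.

```python
import math

k_boltzmann = 8.6173303e-5        # eV/K
kT = k_boltzmann * 20.0           # thermal energy at the 20 K test temperature, eV
c_star, p_half = 100.0, 0.5
gamma_c = 4.7037e-4               # 1/h, from the first calibration step

g1, t1 = 10.0, 100.0              # gravity level (m/s^2) and hours to 50% mortality
g2, t2 = 20.0, 30.0

n1 = -math.log(p_half) / (c_star * t1)
n2 = -math.log(p_half) / (c_star * t2)

gamma_g = kT / (g1 - g2) * math.log(n1 / n2)       # sensitivity to gravity, eV*s^2/m
u0 = gamma_g * g1 - kT * math.log(n1 / gamma_c)    # stress-free activation energy, eV

def lifetime_coefficient(g):
    """Coefficient c in the lifetime relation t = -c * ln(P), in hours."""
    return math.exp((u0 - gamma_g * g) / kT) / (gamma_c * c_star)

print(f"gamma_G = {gamma_g:.4e} eV*s^2/m, U0 = {u0:.4e} eV")
print(f"t = -{lifetime_coefficient(10.0):.1f} ln(P) for G = 10 m/s^2")
print(f"t = -{lifetime_coefficient(0.0):.1f} ln(P) for G = 0")
```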

In this example, lower gravity resulted in a considerably longer lifetime. It is noteworthy that at the microbiological level the effect of gravitational forces might be considerably less significant than, say, electromagnetic or radiation influences. As a matter of fact, the BAZ method has recently been employed in application to electron devices subjected to radiation [45], and the approach is certainly applicable to the biological problem addressed in this paper. Different types of radiation are well-known "life killers".

Conclusion


"There are things in this world, far more important than the most splendid discoveries—it is the methods by which they were made."

Gottfried Leibniz, German mathematician

The outcome of a research or engineering undertaking of importance must be quantified to assure its success and safety. The reviewed publications could be used to select a suitable modeling method; the Boltzmann-Arrhenius-Zhurkov equation might be one of them. Analytical modeling should always be considered, in addition to computer simulations, in any significant engineering endeavor.

References


  1. Suhir E (2012) Likelihood of vehicular mission success-and-safety. J Aircraft 49.
  2. Suhir E (2018) Aerospace mission outcome: Predictive modeling. editorial, special issue challenges in reliability analysis of aerospace electronics, Aerospace 5.
  3. Baron S, Kruser DS, Huey B (1990) Quantitative modeling of human performance in complex dynamic systems. National Academy Press, Washington, DC.
  4. Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. Lawrence Erlbaum Associates, Hillside, NJ.
  5. Ericsson KA, Kintsch W (1995) Long-term working memory. Psychological Review 102.
  6. Estes WK (2002) Traps in the route to models of memory and decision. Psychonomic Bulletin and Review 9: 3-25.
  7. Jagacinski R, Flach J (2003) Control theory for humans: Quantitative approaches to modeling performance. Lawrence Erlbaum Ass.
  8. Foyle DC, Hooey BL (2008) Human performance modeling in aviation. CRC Press.
  9. Gluck KA, Pew RW (2005) Modeling human behavior with integrated cognitive architectures. Lawrence Erlbaum Associates.
  10. Gore BF, Smith JD (2006) Risk assessment and human performance modeling: The need for an integrated approach. Int J Human Factors Modeling and Simulations 1.
  11. Suhir E (2011) Human-in-the-Loop. Likelihood of a vehicular mission-success-and-safety and the role of the human factor. Aerospace Conference, Big Sky, Montana, 5-12.
  12. Suhir E (2013) Miracle-on-the-Hudson: Quantified Aftermath. Int J. of Human Factors Modeling and Simulation 4.
  13. Suhir E (2014) Human-in-the-loop: Probabilistic predictive modeling, its role, attributes, challenges and applications. Theoretical Issues in Ergonomics Science 16: 99-123.
  14. Suhir E (2014) Human-in-the-loop (HITL): Probabilistic predictive modeling (PPM) of an aerospace mission/situation outcome. Aerospace 1.
  15. Suhir E, Bey C, Lini S, et al. (2014) Anticipation in aeronautics: Probabilistic assessments. Theoretical Issues in Ergonomics Science.
  16. Suhir E (2015) Human-in-the-loop and aerospace navigation success and safety: Application of probabilistic predictive modeling. SAE Conference, Seattle, WA, USA.
  17. Suhir E (2018) Human-in-the-Loop: Probabilistic Modeling of an Aerospace Mission Outcome. CRC Press, Boca Raton, London, New York.
  18. Suhir E (2018) Quantifying human factors: Towards analytical human-in-the loop. Int. J. of Human Factor Modeling and Simulation 6.
  19. Suhir E (2019) Probabilistic risk analysis (PRA) in aerospace human-in-the-loop (HITL) Tasks, plenary lecture. IHSI 2019, Biarritz, France.
  20. Suhir E (2010) Probabilistic Design for Reliability. ChipScale Reviews 14: 6.
  21. Suhir E (2005) Reliability and accelerated life testing. Semiconductor International.
  22. Suhir E (2010) Analysis of a pre-stressed bi-material Accelerated Life Test (ALT) specimen. ZAMM.
  23. Suhir E, Mahajan R, Lucero A, et al. (2012) Probabilistic design for reliability (PDfR) and a novel approach to Qualification Testing (QT). IEEE/AIAA Aerospace Conf., Big Sky, Montana.
  24. Suhir E (2012) Considering electronic product's quality specifications by application(s). ChipScale Reviews.
  25. Suhir E (2013) Assuring aerospace electronics and photonics reliability: What could and should be done differently. IEEE Aerospace Conference, Big Sky, Montana.
  26. Suhir E, Bensoussan A (2014) Quantified reliability of aerospace optoelectronics. SAE Aerospace Systems and Technology Conference, Cincinnati, OH, USA.
  27. Suhir E, Bensoussan A, Khatibi G, et al. (2014) Probabilistic design for reliability in electronics and photonics: Role, significance, attributes, challenges. Int. Reliability Physics Symp., Monterey, CA.
  28. Suhir E (2017) Probabilistic design for reliability of electronic materials, assemblies, packages and systems: attributes, challenges, pitfalls. Plenary Lecture, MMCTSE 2017, Murray Edwards College, Cambridge, UK.
  29. Suhir E, Ghaffarian R (2018) Constitutive equation for the prediction of an aerospace electron device performance-brief review. Aerospace.
  30. Suhir E (2016) Aerospace electronics-and-photonics reliability has to be quantified to be assured. AIAA SciTech Conf. San Diego, USA.
  31. Suhir E, Mogford RH (2011) Two men in a cockpit: Probabilistic assessment of the likelihood of a casualty if one of the two navigators becomes incapacitated. Journal of Aircraft 48.
  32. Suhir E, Yi S (2017) Probabilistic design for reliability (PDfR) of medical electronic devices (MEDs): When reliability is imperative, ability to quantify it is a must. Journal of SMT 30.
  33. Tversky A, Kahneman D (1974) Judgment under uncertainty: Heuristics and biases. Science 185: 1124-1131.
  34. Suhir E (2018) What could and should be done differently: Failure-oriented-accelerated-testing (FOAT) and its role in making an aerospace electronics device into a product. Journal of Materials Science: Materials in Electronics 29: 2939-2948.
  35. Suhir E (2019) Making a viable medical electron device package into a reliable product. IMAPS Advancing Microelectronics 46.
  36. Suhir E (2019) Failure-oriented-accelerated-testing (FOAT), Boltzmann-Arrhenius-Zhurkov Equation (BAZ), and their application in aerospace microelectronics and photonics reliability engineering. International Journal of Aeronautical Science & Aerospace Research 6: 185-191.
  37. Suhir E (2019) Failure-oriented-accelerated-testing and its possible application in ergonomics. Ergonomics International Journal 3.
  38. Suhir E, Bechou L (2013) Availability index and minimized reliability cost. Circuit Assemblies.
  39. Suhir E, Bechou L, Bensoussan A (2012) Technical diagnostics in electronics: Application of bayes formula and boltzmann-arrhenius-zhurkov (BAZ) model. Circuit Assembly.
  40. Suhir E (2014) Three-step concept (TSC) in modeling microelectronics reliability (MR): Boltzmann-Arrhenius-Zhurkov (BAZ) probabilistic physics-of-failure equation sandwiched between two statistical models. Microelectronics Reliability 54: 2594-2603.
  41. Suhir E (2017) Static fatigue lifetime of optical fibers assessed using boltzmann-arrhenius-zhurkov (baz) model. Journal of Materials Science: Materials in Electronics 28: 11689-11694.
  42. Suhir E, Stamenkovic Z (2020) Using yield to predict long-term reliability of integrated circuits: Application of Boltzmann-Arrhenius-Zhurkov model. Solid-State Electronics 164: 107746.
  43. Suhir E (2020) Boltzmann-arrhenius-zhurkov equation and its applications in electronic-and-photonic aerospace materials reliability-physics problems. Int. Journal of Aeronautical Science and Aerospace Research (IJASAR).
  44. Ponomarev A, Suhir E (2019) Predicted useful lifetime of aerospace electronics experiencing ionizing radiation: Application of BAZ model. Journal of Aerospace Engineering and Mechanics 3.
  45. Suhir E (2011) Analysis of a pre-stressed bi-material accelerated life test (ALT) specimen. Zeitschrift fur Angewandte Mathematik und Mechanik 91: 371-385.
  46. Suhir E, Poborets B (1990) Solder glass attachment in cerdip/Cerquad packages: Thermally induced stresses and mechanical reliability. 40th ECTC, Las Vegas, Nevada, USA.
  47. Suhir E (1996) Applied Probability for Engineers and Scientists. McGraw-Hill.
  48. Suhir E, Ghaffarian R (2019) Electron device subjected to temperature cycling: Predicted time-to-Failure. Journal of Electronic Materials 48: 778-779.
  49. Suhir E (2018) Low-Cycle-Fatigue failures of solder material in electronics: Analytical modeling enables to predict and possibly prevent them-review. Journal of Aerospace Engineering and Mechanics 2.
  50. Hall PM (1984) Forces, moments, and displacements during thermal chamber cycling of leadless ceramic chip carriers soldered to printed boards. IEEE Transactions on Components, Hybrids, and Manufacturing Technology 7: 314-327.
  51. Hall PM (1984) Strain measurements during thermal chamber cycling of leadless ceramic chip carriers soldered to printed boards. (34th) ECTC, New Orleans, LA, USA.
  52. Hall PM (1987) Creep and stress relaxation in solder joints in surface-mounted chip carriers. Proc Electronic Component Conf (37th) Boston, MA, USA.
  53. Hall PM, Howland FL, Kim YS, et al. (1990) Strains in aluminum-adhesive-ceramic tri-layer. Journal of Electronic Packaging 112: 288-302.
  54. Burn-In (2004) MIL-STD-883F: Test method standard, microchips. Washington, DC, USA.
  55. Kececioglu D, Sun FB (1997) Burn-in-testing: Its quantification and optimization. Prentice Hall: Upper Saddle River, NJ, USA.
  56. Suhir E (2019) To burn-in, or not to burn-in: That's the question. Aerospace 6.
  57. Suhir E (2020) Is burn-in always needed?. Int J of Advanced Research in Electrical, Electronics and Instrumentation Engineering (IJAREEIE) 6: 2751-2757.
  58. Suhir E (2020) For how long should burn-in testing last? Journal of Electrical & Electronic Systems (JEES).
  59. Zaheer A, Bachmann R (2006) Handbook of trust research. Edward Elgar, Cheltenham, UK.
  60. McKnight DH, Carter M, Thatcher JB, et al. (2011) Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems.
  61. Hoff KA, Bashir M (2015) Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors 57: 407-434.
  62. Madhavan P, Wiegmann DA (2007) Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomic Science 8: 277-301.
  63. Rosenfeld A, Kraus S (2018) Predicting human decision-making: From prediction to action. Morgan & Claypool 150.
  64. Chatzi A, Wayne M, Bates P, et al. (2019) The explored link between communication and trust in aviation maintenance practice. Aerospace 6: 66.
  65. Suhir E (2019) Adequate trust, human-capacity-factor, probability-distribution-function of human non-failure and its entropy. Int Journal of Human Factor Modeling and Simulation 7.
  66. Kaindl H, Svetinovic D (2019) Avoiding undertrust and overtrust. In joint proceedings of REFSQ-2019 workshops, Doctoral Symp., Live Studies Track and Poster Track, co-located with the 25th Int. Conf. on Requirements Engineering: Foundation for Software Quality (REFSQ 2019), Essen, Germany.
  67. Suhir E (2009) Helicopter-landing-ship: Undercarriage strength and the role of the human factor. ASME Offshore Mechanics and Arctic Engineering (OMAE) Journal 132: 011603.
  68. Salotti JM, Hedmann R, Suhir E (2014) Crew Size impact on the design, risks and cost of a human mission to mars. IEEE Aerospace Conference, Big Sky, Montana.
  69. Salotti JM, Suhir E (2014) Manned missions to mars: Minimizing risks of failure. Acta Astronautica 93: 148-161.
  70. Suhir E (2017) Human-in-the-loop: Application of the double exponential probability distribution function enables to quantify the role of the human factor. Int J of Human Factor Modeling and Simulation 5.
  71. Restle F, Greeno J (1970) Introduction to mathematical psychology. Addison Wesley, Reading, MA.
  72. Sheridan TB, Ferrell WR (1974) Man-machine systems: Information, control, and decision models of human performance, MIT Press, Cambridge, Mass.
  73. Goodstein LP, Andersen HB, Olsen SE (1988) Tasks, errors, and mental models. Taylor and Francis.
  74. Hamilton D, Bierbaum C (1990) Task analysis/workload (TAWL)-A methodology for predicting operator workload. Proc of the Human Factors and Ergonomics Society 34-th Annual Meeting, Santa Monica, CA.
  75. Hollnagel E (1993) Human reliability analysis: Context and control. Academic Press, London.
  76. Hancock PA, Caird JK (1993) Experimental evaluation of a model of mental workload. Human Factors: The Journal of the Human Factors and Ergonomics Society 35: 413-429.
  77. Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Human Factors 37: 32-64.
  78. Endsley MR, Garland DJ (2000) Situation awareness analysis and measurement. Lawrence Erlbaum Associates, Mahwah, NJ.
  79. Lebiere C (2001) A theory based model of cognitive workload and its applications. Proc of the Interservice/Industry Training, Simulation and Education Conf, Arlington, VA, NDIA.
  80. Polk TA, Seifert CM (2002) Cognitive modeling. MIT Press, Cambridge, Mass.
  81. Kirlik A (2003) Human factors distributes its workload. Review of E. Salas, Advances in Human Performance and Cognitive Engineering Research, Contemporary Psychology.
  82. Hobbs A (2004) Human factors: The last frontier of aviation safety. Int J of Aviation Psychology 14: 331-345.
  83. Diller DE, Gluck KA, Tenney YJ, et al. (2005) Comparison, convergence, and divergence in models of multitasking and category learning, and in architectures used to create them. In: Gluck KA, Pew R W, Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation. Lawrence Erlbaum Associates, 307-349.
  84. Lehto R, Steven JL (2008) Introduction to human factors and ergonomics for engineers. (2nd edn), Lawrence Erlbaum Associates, Taylor and Francis Group, New-York, London.
  85. Lini S, Bey C, Hourlier S, et al. (2012) Anticipation in aeronautics: Exploring pathways toward a contextualized aggregative model based on existing concepts. In: D de Waard, K Brookhuis, F Dehais, C Weikert, S Röttger, D Manzey, S Biede, F Reuzeau and P Terrier, Human factors: A view from an integrative perspective. Proceedings HFES Europe Chapter Conference Toulouse, France.
  86. Salotti JM, Claverie B (2012) Human system interactions in the design of an interplanetary mission. In: De Waard D, Brookhuis K, Dehais F, Weikert C, Röttger S, Manzey D, Biede S, Reuzeau F, Terrier P, Human factors: A view from an integrative perspective. Proceedings HFES Europe Chapter Conference, Toulouse, France.
  87. Salotti JM (2012) Revised scenario for human missions to mars. Acta Astronautica 81: 273-287.
  88. Lini S, Favier PA, André JM, et al. (2015) Influence of anticipatory time depth on cognitive load in an aeronautical context. Le Travail Humain 78: 239-256.
  89. Charles RL, Nixon J (2019) Measuring mental workload using physiological measures: A systematic review. Appl Ergon 74: 221-232.
  90. Kundlinger T, Riener A, Sofra N, et al. (2020) Driver drowsiness in automated and manual driving: Insights from a test track study. 25-th International Conference on Intelligent User Interfaces (IUI), Cagliari, Italy, ACM, New York, NY, USA.
  91. Kalske P, Unikie (2019) Private communication.
  92. Hourlier S, Suhir E (2014) Designing with consideration of the human factor: Changing the paradigm for higher safety. IEEE Aerospace Conference, Montana.
  93. Kahneman D, Slovic P, Tversky A (1982) Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
  94. Luckender C, Rathmair M, Kaindl H (2017) Investigating and coordinating safety-critical feature interactions in automotive systems using simulation. System Sciences, Proceedings of the 50th Hawaii.
  95. Orasanu J, Martin L, Davison J (1998) Errors in aviation decision making: Bad decisions or bad luck? Naturalistic Decision Making, Warrington, VA.
  96. Salotti J-M, Suhir E (2014) Some major guiding principles for making future manned missions to mars safe and reliable. IEEE Aerospace Conference, Montana.
  97. Society of Automotive Engineers (2018) Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International.
  98. Sirkin DM (2019) Private communication, Stanford University.
  99. Suhir E (2018) Quantifying the roles of human error (HE) and his/her state-of-health (SH): Use of the double-exponential-probability-distribution-function. International Journal of Human Factors Modelling and Simulation 6: 140-161.
  100. Leiden K, Keller JW, French JW (2001) Context of human error in commercial aviation. Micro Analysis and Design, Boulder, CO.
  101. Suhir E (2019) Assessment of the required human capacity factor (HCF) using flight simulator as an appropriate accelerated test vehicle. Int Journal of Human Factor Modeling and Simulation 1.
  102. Suhir E, Paul G (2020) Avoiding collision in an automated driving situation. Theoretical Issues in Ergonomics Science (TIES).
  103. Suhir E, Paul G, Kaindl H (2020) Towards probabilistic analysis of human-system integration in automated driving. In: Ahram T, Karwowski W, Vergnano A, Leali F, Taiar R, Intelligent Human Systems Integration.
  104. Suhir E (2020) Survivability of species in different habitats: Application of multi-parametric boltzmann-arrhenius-zhurkov equation. Acta Astronautica 175: 249-253.
  105. Suhir E, Paul G (2020) Automated driving (AD): Should the variability of the available-sight-distance (ASD) be considered? Theoretical Issues in Ergonomic Science.
  106. Suhir E (2020) Head-on railway obstruction: Probabilistic model. Journal of Rail Transport Planning & Management.
  107. Suhir E (2020) Quantifying the effect of astronaut's health on his/hers performance: Application of the double-exponential probability distribution function. Theoretical Issues in Ergonomic Science.
  108. Reason J (1990) Human Error. Cambridge University Press, Cambridge, UK.
  109. Jin Y, Goto Y, Nishimoto Y, et al. (1991) Dynamic obstacle-detecting system for railway surroundings using highly accurate laser-sectioning method. Proc IEEE.
  110. Fujita T, Okano Y (1992) Integrated disaster prevention information system. Japanese Railway Engineering, 1.
  111. Leighton CL, Dennis CR (1995) Risk assessment of a new high speed railway. Quality and Reliability Engineering Int 11: 445-455.
  112. Bin N (1996) Analysis of train braking accuracy and safe protection distance in automatic train protection (ATP) systems. Wit Press, Madrid.
  113. Fernández A, Vitoriano B (2004) Railway collision risk analysis due to obstacles. In: J Allan, CA Brebbia, RJ Hill, G Sciutto, S Sone, Computers in Railways IX. WIT Press, Madrid.
  114. El Koursi M, Ching-Yao Chan, Wei-Bin Zhang (1999) Preliminary hazard analyses: A case study of advanced vehicle control and safety systems. Conference Proceedings. IEEE International Conference on Systems, Man, and Cybernetics, Piscataway, NJ, USA.

Abstract


The outcome a of crucial engineering undertaking must be quantified at the design/planning stage to assure its success and safety, and since the probability of an operational failure is, in effect, never zero, such a quantification should be done on the probabilistic basis. Some recently published work on the probabilistic predictive modeling (PPM) and probabilistic design for reliability (PDfR) of aerospace electronic and photonic (E&P) products, including human-in-the-loop (HITL) problems and challenges, is addressed and briefly reviewed. The effort was lately "brought down to earth" to model possible collision in automated driving (AD). In addition, some problems and tasks beyond the E&P and vehicular engineering field are also addressed with an objective to show how the developed methods and approached can be effectively and fruitfully employed, whenever there is a need to quantify the reliability of an engineering technology with consideration of the human performance.

Accordingly, the following nine problems have been addressed in this review with an objective to show how the outcome of a critical engineering endeavor can be predicted using PPM and PDfR concept: 1) Accelerated testing in E&P engineering: significance, attributes and challenges; 2) Failure-oriented accelerated testing (FOAT), its objective and role; 3) PPM approach and PDfR concept, their roles and applications; 4) Kinetic multi-parametric Boltzmann-Arrhenius-Zhurkov (BAZ) equation as the "heart" of the PDfR concept; 5) Burn-in-testing (BIT) of E&P products with an attempt to shed light on the basic "to BIT or not to BIT" question; 6) Adequate trust is an important constituent of the human-capacity-factor (HCF) affecting the outcome of a mission or an extraordinary situation; 7) PPM of an emergency-stopping situation in automated driving (AD) or on a railroad (RR); 8) Quantifying the astronaut's/pilot's/driver's/machinist's state of health (SoH) and its effect on his/hers performance; 9) Survivability of species in different habitats. The objective of the latter effort is to demonstrate that the developed PPM approaches and methodologies, and particularly those using multiparametric BAZ equation, could be effectively employed well beyond the vehicular engineering area.

The general concepts are illustrated by numerical examples. All the considered PPM problems were treated using analytical ("mathematical") modeling. The attributes of the such modeling, the background of the multiparametric BAZ equation and the ten major principles ("the ten commandments") of the PDfR concept are addressed in the appendices.

References

  1. Suhir E (2012) Likelihood of vehicular mission success-and-safety. J Aircraft 49.
  2. Suhir E (2018) Aerospace mission outcome: Predictive modeling. editorial, special issue challenges in reliability analysis of aerospace electronics, Aerospace 5.
  3. Baron S, Kruser DS, Huey B (1990) Quantitative modelinbg of human performance in complex dynamic systems. National Academy Press, Washington, DC.
  4. Card SK, Moran TP, Newell A (1983) The psychology of human-computer interaction. Lawrence Erlbaum Associates, Hillside, NJ.
  5. Ericsson KA Kintsch W (1995) Long term working memory. Psychological Review 102.
  6. Estes WK (2002) Traps in the route to models of memory and decision. Psychnomic Bulletin and Review 9: 3-25.
  7. Jagacinski R, Flach J (2003) Control theory for humans: Quantitative approaches to modeling performance. Lawrence Erlbaum Ass.
  8. Foyle DC, Hooey BL (2008) Human performance modeling in aviation. CRC Press.
  9. Gluck KA, Pew RW (2005) Modeling human behavior with integrated cognitive architectures. Lawrence Erlbaum Associates.
  10. Gore BF, Smith JD (2006) Risk assessment and human performance modeling: The need for an integrated approach. Int J Human Factors Modeling and Simulations 1.
  11. Suhir E (2011) Human-in-the-Loop. Likelihood of a vehicular mission-success-and-safety and the role of the human factor. Aerospace Conference, Big Sky, Montana, 5-12.
  12. Suhir E (2013) Miracle-on-the-Hudson: Quantified Aftermath. Int J. of Human Factors Modeling and Simulation 4.
  13. Suhir E (2014) Human-in-the-loop: Probabilistic predictive modeling, its role, attributes, challenges and applications. Theoretical Issues in Ergonomics Science 16: 99-123.
  14. Suhir E (2014) Human-in-the-loop (HITL): Probabilistic predictive modeling (PPM) of an aerospace mission/situation outcome. Aerospace 1.
  15. Suhir E, Bey C, Lini S, et al. (2014) Anticipation in aeronautics: Probabilistic assessments. Theoretical Issues in Ergonomics Science.
  16. Suhir E (2015) Human-in-the-loop and aerospace navigation success and safety: Application of probabilistic predictive modeling. SAE Conference, Seattle, WA, USA.
  17. Suhir E (2018) Human-in-the-Loop: Probabilistic Modeling of an Aerospace Mission Outcome. CRC Press, Boca Raton, London, New York.
  18. Suhir E (2018) Quantifying human factors: Towards analytical human-in-the loop. Int. J. of Human Factor Modeling and Simulation 6.
  19. Suhir E (2019) Probabilistic risk analysis (PRA) in aerospace human-in-the-loop (HITL) Tasks, plenary lecture. IHSI 2019, Biarritz, France.
  20. Suhir E (2010) Probabilistic Design for Reliability. ChipScale Reviews 14: 6.
  21. Suhir E (2005) Reliability and accelerated life testing. Semiconductor International.
  22. Suhir E (2010) Analysis of a pre-stressed bi-material Accelerated Life Test (ALT) specimen. ZAMM.
  23. Suhir E, Mahajan R, Lucero A, et al. (2012) Probabilistic design for reliability (PDfR) and a novel approach to Qualification Testing (QT). IEEE/AIAA Aerospace Conf., Big Sky, Montana.
  24. Suhir E (2012) Considering electronic product's quality specifications by application(s). ChipScale Reviews.
  25. Suhir E (2013) Assuring aerospace electronics and photonics reliability: What could and should be done differently. IEEE Aerospace Conference, Big Sky, Montana.
  26. Suhir E, Bensoussan A (2014) Quantified reliability of aerospace optoelectronics. SAE Aerospace Systems and Technology Conference, Cincinnati, OH, USA.
  27. Suhir E, Bensoussan A, Khatibi G, et al. (2014) Probabilistic design for reliability in electronics and photonics: Role, significance, attributes, challenges. Int. Reliability Physics Symp., Monterey, CA.
  28. Suhir E (2017) Probabilistic design for reliability of electronic materials, assemblies, packages and systems: attributes, challenges, pitfalls. Plenary Lecture, MMCTSE 2017, Murray Edwards College, Cambridge, UK.
  29. Suhir E, Ghaffarian R (2018) Constitutive equation for the prediction of an aerospace electron device performance-brief review. Aerospace.
  30. Suhir E (2016) Aerospace electronics-and-photonics reliability has to be quantified to be assured. AIAA SciTech Conf. San Diego, USA.
  31. Suhir E, Mogford RH (2011) Two men in a cockpit: Probabilistic assessment of the likelihood of a casualty if one of the two navigators becomes incapacitated. Journal of Aircraft 48.
  32. Suhir E, Yi S (2017) Probabilistic design for reliability (PDfR) of medical electronic devices (MEDs): When reliability is imperative, ability to quantify it is a must. Journal of SMT 30.
  33. Tversky A, Kahneman D (1974) Judgment under uncertainty: Heuristics and biases. Science 185: 1124-1131.
  34. Suhir E (2018) What could and should be done differently: Failure-oriented-accelerated-testing (FOAT) and its role in making an aerospace electronics device into a product. Journal of Materials Science: Materials in Electronics 29: 2939-2948.
  35. Suhir E (2019) Making a viable medical electron device package into a reliable product. IMAPS Advancing Microelectronics 46.
  36. Suhir E (2019) Failure-oriented-accelerated-testing (FOAT), Boltzmann-Arrhenius-Zhurkov Equation (BAZ), and their application in aerospace microelectronics and photonics reliability engineering. International Journal of Aeronautical Science & Aerospace Research 6: 185-191.
  37. Suhir E (2019) Failure-oriented-accelerated-testing and its possible application in ergonomics. Ergonomics International Journal 3.
  38. Suhir E, Bechou L (2013) Availability index and minimized reliability cost. Circuit Assemblies.
  39. Suhir E, Bechou L, Bensoussan A (2012) Technical diagnostics in electronics: Application of bayes formula and boltzmann-arrhenius-zhurkov (BAZ) model. Circuit Assembly.
  40. Suhir E (2014) Three-step concept (TSC) in modeling microelectronics reliability (MR): Boltzmann-Arrhenius-Zhurkov (BAZ) probabilistic physics-of-failure equation sandwiched between two statistical models. Microelectronics Reliability 54: 2594-2603.
  41. Suhir E (2017) Static fatigue lifetime of optical fibers assessed using boltzmann-arrhenius-zhurkov (baz) model. Journal of Materials Science: Materials in Electronics 28: 11689-11694.
  42. Suhir E, Stamenkovic Z (2020) Using yield to predict long-term reliability of integrated circuits: Application of Boltzmann-Arrhenius-Zhurkov model. Solid-State Electronics 164: 107746.
  43. Suhir E (2020) Boltzmann-arrhenius-zhurkov equation and its applications in electronic-and-photonic aerospace materials reliability-physics problems. Int. Journal of Aeronautical Science and Aerospace Research (IJASAR).
  44. Ponomarev A, Suhir E (2019) Predicted useful lifetime of aerospace electronics experiencing ionizing radiation: Application of BAZ model. Journal of Aerospace Engineering and Mechanics 3.
  45. E Suhir (2011) Analysis of a pre-stressed bi-material accelerated life test (ALT) Specimen. Zeitschrift fur Angewandte Mathematik und Mechanik 91: 371-385.
  46. Suhir E, Poborets B (1990) Solder glass attachment in cerdip/Cerquad packages: Thermally induced stresses and mechanical reliability. 40th ECTC, Las Vegas, Nevada, USA.
  47. Suhir E (1996) Applied Probability for Engineers and Scientists. McGraw-Hill.
  48. Suhir E, Ghaffarian R (2019) Electron device subjected to temperature cycling: Predicted time-to-Failure. Journal of Electronic Materials 48: 778-779.
  49. Suhir E (2018) Low-Cycle-Fatigue failures of solder material in electronics: Analytical modeling enables to predict and possibly prevent them-review. Journal of Aerospace Engineering and Mechanics 2.
  50. Hall PM (1984) Forces, moments, and displacements during thermal chamber cycling of leadless ceramic chip carriers soldered to printed boards. IEEE Transactions on Components, Hybrids, and Manufacturing Technology 7: 314-327.
  51. Hall PM (1984) Strain measurements during thermal chamber cycling of leadless ceramic chip carriers soldered to printed boards. (34th) ECTC, New Orleans, LA, USA.
  52. Hall PM (1987) Creep and stress relaxation in solder joints in surface-mounted chip carriers. Proc Electronic Component Conf (37th) Boston, MA, USA.
  53. Hall PM, Howland FL, Kim YS, et al. (1990) Strains in aluminum-adhesive-ceramic tri-layer. Journal of Electronic Packaging 112: 288-302.
  54. Burn-In (2004) MIL-STD-883F: Test method standard, microchips. Washington, DC, USA.
  55. Kececioglu D, Sun FB (1997) Burn-in-testing: Its quantification and optimization. Prentice Hall: Upper Saddle River, NJ, USA.
  56. Suhir E (2019) To burn-in, or not to burn-in: That's the question. Aerospace 6.
  57. Suhir E (2020) Is burn-in always needed?. Int J of Advanced Research in Electrical, Electronics and Instrumentation Engineering (IJAREEIE) 6: 2751-2757.
  58. Suhir E (2020) For how long should burn-in testing last? Journal of Electrical & Electronic Systems (JEES).
  59. Zaheer A, Bachmann R (2006) Handbook of trust research. Edward Elgar, Cheltenham, UK.
  60. McKnight DH, Carter M, Thatcher JB, et al. (2011) Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems.
  61. Hoff KA, Bashir M (2015) Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors 57: 407-434.
  62. Madhavan P, Wiegmann DA (2007) Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomic Science 8: 277-301.
  63. Rosenfeld A Kraus S (2018) Predicting human decision-making: From prediction to action. Morgan & Claypool 150.
  64. Chatzi A, Wayne M, Bates P, et al. (2019) The explored link between communication and trust in aviation maintenance practice. Aerospace 6: 66.
  65. Suhir E (2019) Adequate trust, human-capacity-factor, probability-distribution-function of human non-failure and its entropy. Int Journal of Human Factor Modeling and Simulation 7.
  66. Kaindl H, Svetinovic D (2019) Avoiding undertrust and overtrust. In joint proceedings of REFSQ-2019 workshops, Doctoral Symp., Live Studies Track and Poster Track, co-located with the 25th Int. Conf. on Requirements Engineering: Foundation for Software Quality (REFSQ 2019), Essen, Germany.
  67. Suhir E (2009) Helicopter-landing-ship: Undercarriage strength and the role of the human factor. ASME Offshore Mechanics and Arctic Engineering (OMAE) Journal 132: 011603.
  68. Salotti JM, Hedmann R, Suhir E (2014) Crew Size impact on the design, risks and cost of a human mission to mars. IEEE Aerospace Conference, Big Sky, Montana.
  69. Salotti JM, Suhir E (2014) Manned missions to mars: Minimizing risks of failure. Acta Astronautica 93: 148-161.
  70. Suhir E (2017) Human-in-the-loop: Application of the double exponential probability distribution function enables to quantify the role of the human factor. Int J of Human Factor Modeling and Simulation 5.
  71. Restle F, Greeno J (1970) Introduction to mathematical psychology. Addison Wesley, Reading, MA.
  72. Sheridan TB, Ferrell WR (1974) Man-machine systems: Information, control, and decision models of human performance, MIT Press, Cambridge, Mass.
  73. Goodstein LP, Andersen HB, Olsen SE (1988) Tasks, errors, and mental models. Taylor and Francis.
  74. Hamilton D, Bierbaum C (1990) Task analysis/workload (TAWL)-A methodology for predicting operator workload. Proc of the Human Factors and Ergonomics Society 34-th Annual Meeting, Santa Monica, CA.
  75. Hollnagel E (1993) Human reliability analysis: Context and control. Academic Press, London.
  76. Hancock PA, Caird JK (1993) Experimental evaluation of a model of mental workload. Human Factors: The Journal of the Human Factors and Ergonomics Society 35: 413-429.
  77. Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Human Factors 37: 32-64.
  78. Endsley MR, Garland DJ (2000) Situation awareness analysis and measurement. Lawrence Erlbaum Associates, Mahwah, NJ.
  79. Lebiere C (2001) A theory based model of cognitive workload and its applications. Proc of the Interservice/Industry Training, Simulation and Education Conf, Arlington, VA, NDIA.
  80. Polk TA, Seifert CM (2002) Cognitive modeling. MIT Press, Cambridge, Mass.
  81. Kirlik A (2003) Human factors distributes its workload. Review of E. Salas, Advances in Human Performance and Cognitive Engineering Research, Contemporary Psychology.
  82. Hobbs A (2004) Human factors: The last frontier of aviation safety. Int J of Aviation Psychology 14: 331-345.
  83. Diller DE, Gluck KA, Tenney YJ, et al. (2005) Comparison, convergence, and divergence in models of multitasking and category learning, and in architectures used to create them. In: Gluck KA, Pew RW, Modeling human behavior with integrated cognitive architectures: Comparison, evaluation, and validation. Lawrence Erlbaum Associates, 307-349.
  84. Lehto R, Steven JL (2008) Introduction to human factors and ergonomics for engineers. (2nd edn), Lawrence Erlbaum Associates, Taylor and Francis Group, New York, London.
  85. Lini S, Bey C, Hourlier S, et al. (2012) Anticipation in aeronautics: Exploring pathways toward a contextualized aggregative model based on existing concepts. In: D de Waard, K Brookhuis, F Dehais, C Weikert, S Röttger, D Manzey, S Biede, F Reuzeau and P Terrier, Human factors: A view from an integrative perspective. Proceedings HFES Europe Chapter Conference, Toulouse, France.
  86. Salotti JM, Claverie B (2012) Human system interactions in the design of an interplanetary mission. In: De Waard D, Brookhuis K, Dehais F, Weikert C, Röttger S, Manzey D, Biede S, Reuzeau F, Terrier P, Human factors: A view from an integrative perspective. Proceedings HFES Europe Chapter Conference, Toulouse, France.
  87. Salotti JM (2012) Revised scenario for human missions to Mars. Acta Astronautica 81: 273-287.
  88. Lini S, Favier PA, André JM, et al. (2015) Influence of anticipatory time depth on cognitive load in an aeronautical context. Le Travail Humain 78: 239-256.
  89. Charles RL, Nixon J (2019) Measuring mental workload using physiological measures: A systematic review. Appl Ergon 74: 221-232.
  90. Kundlinger T, Riener A, Sofra N, et al. (2020) Driver drowsiness in automated and manual driving: Insights from a test track study. 25th International Conference on Intelligent User Interfaces (IUI), Cagliari, Italy, ACM, New York, NY, USA.
  91. Kalske P, Unikie (2019) Private communication.
  92. Hourlier S, Suhir E (2014) Designing with consideration of the human factor: Changing the paradigm for higher safety. IEEE Aerospace Conference, Montana.
  93. Kahneman D, Slovic P, Tversky A (1982) Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
  94. Luckender C, Rathmair M, Kaindl H (2017) Investigating and coordinating safety-critical feature interactions in automotive systems using simulation. Proceedings of the 50th Hawaii International Conference on System Sciences.
  95. Orasanu J, Martin L, Davison J (1998) Errors in aviation decision making: Bad decisions or bad luck? Naturalistic Decision Making, Warrington, VA.
  96. Salotti J-M, Suhir E (2014) Some major guiding principles for making future manned missions to Mars safe and reliable. IEEE Aerospace Conference, Montana.
  97. Society of Automotive Engineers (2018) Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE International.
  98. Sirkin DM (2019) Private communication, Stanford University.
  99. Suhir E (2018) Quantifying the roles of human error (HE) and his/her state-of-health (SH): Use of the double-exponential-probability-distribution-function. International Journal of Human Factors Modelling and Simulation 6: 140-161.
  100. Leiden K, Keller JW, French JW (2001) Context of human error in commercial aviation. Micro Analysis and Design, Boulder, CO.
  101. Suhir E (2019) Assessment of the required human capacity factor (HCF) using flight simulator as an appropriate accelerated test vehicle. International Journal of Human Factors Modelling and Simulation 1.
  102. Suhir E, Paul G (2020) Avoiding collision in an automated driving situation. Theoretical Issues in Ergonomics Science (TIES).
  103. Suhir E, Paul G, Kaindl H (2020) Towards probabilistic analysis of human-system integration in automated driving. In: Ahram T, Karwowski W, Vergnano A, Leali F, Taiar R, Intelligent human systems integration.
  104. Suhir E (2020) Survivability of species in different habitats: Application of multi-parametric Boltzmann-Arrhenius-Zhurkov equation. Acta Astronautica 175: 249-253.
  105. Suhir E, Paul G (2020) Automated driving (AD): Should the variability of the available-sight-distance (ASD) be considered? Theoretical Issues in Ergonomics Science.
  106. Suhir E (2020) Head-on railway obstruction: Probabilistic model. Journal of Rail Transport Planning & Management.
  107. Suhir E (2020) Quantifying the effect of astronaut's health on his/her performance: Application of the double-exponential probability distribution function. Theoretical Issues in Ergonomics Science.
  108. Reason J (1990) Human Error. Cambridge University Press, Cambridge, UK.
  109. Jin Y, Goto Y, Nishimoto Y, et al. (1991) Dynamic obstacle-detecting system for railway surroundings using highly accurate laser-sectioning method. Proc IEEE.
  110. Fujita T, Okano Y (1992) Integrated disaster prevention information system. Japanese Railway Engineering, 1.
  111. Leighton CL, Dennis CR (1995) Risk assessment of a new high speed railway. Quality and Reliability Engineering Int 11: 445-455.
  112. Bin N (1996) Analysis of train braking accuracy and safe protection distance in automatic train protection (ATP) systems. Wit Press, Madrid.
  113. Fernández A, Vitoriano B (2004) Railway collision risk analysis due to obstacles. In: J Allan, CA Brebbia, RJ Hill, G Sciutto, S Sone, Computers in Railways IX. WIT Press, Madrid.
  114. El Koursi M, Ching-Yao Chan, Wei-Bin Zhang (1999) Preliminary hazard analyses: A case study of advanced vehicle control and safety systems. Conference Proceedings. IEEE International Conference on Systems, Man, and Cybernetics, Piscataway, NJ, USA.