The Outcome of an Engineering Undertaking of Importance Must be Quantified to Assure its Success and Safety: Review

The outcome of a crucial engineering undertaking must be quantified at the design/planning stage to assure its success and safety, and since the probability of an operational failure is, in effect, never zero, such a quantification should be done on a probabilistic basis. Some recently published work on probabilistic predictive modeling (PPM) and probabilistic design for reliability (PDfR) of aerospace electronic and photonic (E&P) products, including human-in-the-loop (HITL) problems and challenges, is addressed and briefly reviewed. The effort was lately "brought down to earth" to model a possible collision in automated driving (AD). In addition, some problems and tasks beyond the E&P and vehicular engineering fields are also addressed, with an objective to show how the developed methods and approaches can be effectively and fruitfully employed whenever there is a need to quantify the reliability of an engineering technology with consideration of human performance. Accordingly, the following nine problems have been addressed in this review with an objective to show how the outcome of a critical engineering endeavor can be predicted using the PPM approach and PDfR concept: 1) Accelerated testing in E&P engineering: significance, attributes and challenges; 2) Failure-oriented accelerated testing (FOAT), its objective and role; 3) The PPM approach and PDfR concept, their roles and applications; 4) The kinetic multi-parametric Boltzmann-Arrhenius-Zhurkov (BAZ) equation as the "heart" of the PDfR concept; 5) Burn-in testing (BIT) of E&P products, with an attempt to shed light on the basic "to BIT or not to BIT" question; 6) Adequate trust as an important constituent of the human-capacity-factor (HCF) affecting the outcome of a mission or an extraordinary situation; 7) PPM of an emergency-stopping situation in automated driving (AD) or on a railroad (RR); 8) Quantifying the astronaut's/pilot's/driver's/machinist's state of health (SoH) and its effect on his/her performance; 9) 
Survivability of species in different habitats. The objective of the latter effort is to demonstrate that the developed PPM approaches and methodologies, and particularly those using the multi-parametric BAZ equation, could be effectively employed well beyond the vehicular engineering area. The general concepts are illustrated by numerical examples. All the considered PPM problems were treated using analytical ("mathematical") modeling. The attributes of such modeling, the background of the multi-parametric BAZ equation, and the ten major principles ("the ten commandments") of the PDfR concept are addressed in the appendices.

Citation: Suhir E (2020) The Outcome of an Engineering Undertaking of Importance Must be Quantified to Assure its Success and Safety: Review. J Aerosp Eng Mech 4(2):218-253

The application of the PDfR concept can turn the art of creating reliable products and assuring adequate human performance into a well-substantiated and "reliable" science. Tversky and Kahneman [33], whose 1979 prospect-theory work later earned Kahneman the Nobel Memorial Prize in Economics, were, perhaps, the first to indicate the importance of considering the role of uncertainties in decision making and, particularly, of analyzing the role of the cognitive biases that affect decision making in life and work. Since, however, these investigators were, although outstanding, traditional human psychologists, no quantitative, not to mention probabilistic, assessments were suggested. It should be pointed out that while the traditional statistical human-factor-oriented approaches are based mostly on experimentation followed by statistical analyses, an important feature of the PDfR concept is that it is based upon, and starts with, a physically meaningful and flexible predictive model (such as the BAZ one) geared to the appropriate FOAT [34-37].
Statistics and/or experimentation can be applied afterwards, to establish the important numerical characteristics of the selected model (such as, say, the mean value and the standard deviation of a normal distribution) and/or to confirm the suitability of a particular model for the application of interest. The highly focused and highly cost-effective FOAT, the "heart" of the PDfR concept, is aimed, first of all, at understanding and/or confirming the anticipated physics of failure (see Table 1 below). The traditional, about forty-year-old, highly accelerated life testing (HALT), although it sheds important light on the reliability of the E&P product of interest (bad things would not last for forty years, would they?), does not quantify reliability and, because of that, can hardly improve our understanding of the device's and/or package's physics of failure. FOAT, geared to a physically meaningful PDfR model, can be used as an appropriate extension and modification of HALT. An important attribute of the PPM/PDfR/FOAT-based approach is that if the predicted probability of non-failure, based on the applied PDfR methodology and FOAT effort, is, for whatever reason, not acceptable, then an appropriate sensitivity analysis (SA), using the already developed and available algorithms and calculation procedures, can be effectively conducted to improve the situation without resorting to additional expensive and time-consuming testing. Such a cost-effective and insightful approach is applicable, with the appropriate modifications and generalizations, if necessary, to numerous situations, not necessarily in the vehicular domain, when a human-in-control encounters an uncertain environment or a hazardous situation. The suggested quantification-based HITL approach is applicable also when there is an incentive to quantify a human's qualifications and/or when there is a need to assess, and possibly improve, human performance and the human's possible role in a particular engagement. 
An important additional consideration in favor of quantification of reliability has to do with the always desirable optimizations. The best engineering product is, in effect, as is known, the best compromise between the requirements for its reliability, cost effectiveness and time-to-market (time-to-completion). The latter two requirements are always quantified. No effective optimization could be achieved, of course, if reliability is not quantified as well. In HITL situations, such an optimization should be done with consideration of the role of the human factor.

Modeling; PDF: Probability Distribution Function; PDfR: Probabilistic Design for Reliability; PHM: Prognostics and Health Monitoring; PPM: Probabilistic Predictive Modeling; PRA: Probabilistic Risk Analysis; QT: Qualification Testing; RR: Railroad; RUL: Remaining Useful Life; SAE: Society of Automotive Engineers; SF: Safety Factor; SJI: Solder Joint Interconnections; SoH: State of Health; SFR: Statistical Failure Rate; TTF: Time-to-Failure


St. Paul
Progress in vehicular safety is achieved today mostly through various, predominantly experimental and a posteriori statistical, ways to improve the hard- and software of the instrumentation and equipment, implement better ergonomics, and introduce and advance other more or less well-established efforts of experimental reliability engineering and traditional human psychology that directly affect a product's reliability and human performance. There exists, however, a significant potential for the reduction of accidents and casualties in aerospace, maritime, automotive and railroad engineering through a better understanding of the role that various uncertainties play in the planner's and operator's worlds of work, when never failure-free navigation equipment and instrumentation, the never hundred-percent-predictable response of the object of control (an air- or spacecraft, a car, a train, or an ocean-going vessel), an uncertain and often harsh environment, and never-perfect human performance contribute jointly to the outcome of a vehicular mission or an extraordinary situation. By employing quantifiable and measurable ways of assessing the role and significance of critical uncertainties, and by treating the HITL as a part, often the most crucial part, of a complex man-instrumentation-vehicle-environment-navigation system and its critical interfaces, one could improve dramatically the state-of-the-art in assuring the operational safety of a vehicle and its passengers and crew. This can be done by predicting, quantifying and, if necessary and possible, even specifying an adequate (typically low enough, but different for different vehicles, missions and circumstances) probability of success and safety of a mission or an off-normal situation [1-19].
Nothing and nobody is perfect, and the difference between a highly reliable technology, object, product, performance or mission and an insufficiently reliable one is "merely" in the levels of their never-zero probability of failure. Application of the PPM approach and the PDfR concept [20-31] provides a natural and effective means for the reduction of vehicular casualties. This approach, as has been indicated, can be applied also beyond the vehicular field, in devices whose operational reliability is critical, such as, e.g., military or long-haul communications systems, or medical devices [32]. When the success and safety of a critical undertaking are imperative, the ability to predict and quantify its outcome is paramount. The application of the PDfR concept can dramatically improve the state-of-the-art in reliability and quality engineering by turning the art of creating reliable products into a well-substantiated engineering discipline.

A typical example of product development testing (PDT) is shear-off testing, conducted when there is a need to determine the most feasible bonding material and its thickness, and/or to assess its bonding strength, and/or to evaluate the shear modulus of the material. HALT is currently widely employed, in different modifications, with an intent to determine the product's reliability weaknesses, assess its reliability limits, ruggedize the product by applying elevated stresses (not necessarily mechanical and not necessarily limited to the anticipated field stresses) that could cause field failures, and provide supposedly large (although, actually, unknown) safety margins over expected in-use conditions. HALT often involves step-wise stressing, rapid thermal transitions, and other means that enable carrying out testing in a time- and cost-effective fashion. HALT is a "discovery" test. It is not a qualification test (QT), though, i.e., not a "pass/fail" test. 
In the review that follows, some important problems and tasks associated with assuring the success and safety of vehicular and other engineering undertakings are addressed, with an objective to show what could and should be done differently when high reliability is imperative and should be quantified to assure its adequate level and cost effectiveness. A simple example of how to optimize reliability [38] indicates that such optimization can be achieved by optimizing the product's availability, i.e., the probability that the product is sound and hence available to the user when needed. When encountering a particular reliability problem at the design, fabrication, testing, or operation stage of the product's life, and considering the use of predictive modeling to assess the seriousness and the likely consequences of a detected failure, one has to choose whether a statistical, or a physics-of-failure-based, or a suitable combination of these two major modeling tools should be employed to address the problem of interest and to decide on how to proceed.
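The availability argument can be illustrated with a minimal sketch. The steady-state availability formula used here, A = MTBF/(MTBF + MTTR), is the standard one from reliability engineering and is an assumption on my part; the cited example [38] may use a different formulation, and the numbers below are purely illustrative.

```python
# Steady-state availability: the fraction of time the product is sound,
# i.e., available to the user when needed.
def availability(mtbf_h: float, mttr_h: float) -> float:
    """MTBF = mean time between failures, MTTR = mean time to repair (hours)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Illustrative trade-off: a more reliable unit vs. a faster-to-repair one.
print(availability(5000.0, 50.0))   # high MTBF, slow repair
print(availability(2000.0, 10.0))   # lower MTBF, fast repair
```

Note that the second, less reliable but quickly repairable unit ends up with the higher availability, which is the point of the optimization argument.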
A three-step concept (TSC) is suggested as a possible way to go in such a situation [39,40]. The classical statistical Bayes formula can be used at the first step as a technical diagnostics tool, with the objective to identify, on a probabilistic basis, the faulty (malfunctioning) device(s) from the obtained signals ("symptoms of faults"). The multi-parametric BAZ model can be employed at the TSC's second step to assess the remaining useful life (RUL) of the faulty device(s). If the assessed RUL is still long enough, no action might be needed, but if it is not, a corrective restoration action becomes necessary. In any event, after the first two steps are carried out, the device is put back into operation, provided that the assessed probability of its continuing failure-free operation is found to be satisfactory. If failure nonetheless occurs, the third step should be undertaken to update the reliability estimate. The statistical beta-distribution, in which the probability of failure itself is treated as a random variable, is suggested to be used at this step. The suggested concept is illustrated by a numerical example geared to the use of the prognostics-and-health-monitoring (PHM) effort in actual operation, such as, e.g., an en-route flight mission.
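A minimal numerical sketch of the first and third TSC steps follows; the symptom likelihoods and the Beta prior parameters below are invented for illustration, and the BAZ-based second (RUL) step is omitted.

```python
# Step 1: Bayes formula as a technical diagnostics tool.
# Prior probability that the device is faulty, and the probabilities of
# observing the symptom for a faulty and for a healthy device (all assumed).
p_fault = 0.02
p_symptom_given_fault = 0.90
p_symptom_given_ok = 0.05

p_symptom = (p_symptom_given_fault * p_fault
             + p_symptom_given_ok * (1.0 - p_fault))
p_fault_given_symptom = p_symptom_given_fault * p_fault / p_symptom
print(p_fault_given_symptom)   # posterior probability the device is faulty

# Step 3: beta-distribution update, with the probability of failure itself
# treated as a random variable. Start from a Beta(a, b) prior and update it
# with the observed numbers of failures and non-failures.
a, b = 1.0, 99.0                    # prior: mean failure probability 0.01
failures, successes = 1, 500        # observed in subsequent operation
a_post, b_post = a + failures, b + successes
print(a_post / (a_post + b_post))   # posterior mean probability of failure
```

The conjugate Beta update is what makes the third step cheap: each observed failure or failure-free interval simply increments one of the two shape parameters.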
For new products, no best practices or suitable HALT methodologies have yet been developed. Quantitative estimates based on the FOAT and subsequent PPM might not be perfect, at least at the beginning, but it is still better to pursue this effort than to turn a blind eye to the never-zero probability of the product's failure: the reliability of an E&P product cannot be assured if this probability is not assessed and made adequate for the given product. If one sets out to understand the physics of failure in order to create, in accordance with the "principle of practical confidence", a failure-free product, conducting FOAT is imperative: it confirms the usage of a particular predictive model, such as the BAZ equation, confirms the physics of failure, and establishes the numerical characteristics (activation energy, time constant, sensitivity factors, etc.) of the selected model.
FOAT could be viewed as an extension of HALT; but while HALT is a "black box", i.e., a methodology that can be perceived in terms of its inputs and outputs without clear knowledge of the underlying physics and likelihood of failure, FOAT is a "transparent box", whose objective is to confirm the use of a particular reliability model. While HALT does not measure (does not quantify) reliability, FOAT does. The major assumption is, of course, that the model should be valid for both accelerated and actual operation conditions. HALT, which tries to "kill many unknown birds with one (also not very well known) stone", has demonstrated, however, over the years its ability to improve robustness through a "test-fail-fix" process, in which the applied stresses (stimuli) are somewhat above the specified operating limits. This "somewhat above" is based, however, on intuition rather than on calculation.
There is a general, and to a great extent justified, perception that HALT is able to precipitate and identify failures of different origins. HALT can therefore be used for "rough tuning" of a product's reliability, and FOAT could be employed when "fine tuning" is needed, i.e., when there is a need to quantify, assure and even specify the operational reliability of a product. The FOAT-based approach could be viewed as a quantified and reliability-physics-oriented HALT. The FOAT approach should be geared to a particular technology and application, with consideration of the most likely stressors. FOAT and HALT could be carried out separately, or might be partially combined in a particular AT effort. New products present natural reliability concerns, as well as significant challenges, at all stages of their design, manufacture and use. An appropriate combination of HALT and FOAT efforts could be especially useful for ruggedizing such products and quantifying their reliability. It is always necessary to correctly identify the expected failure modes and mechanisms, and to establish the appropriate stress limits of HALTs and FOATs, with an objective to prevent "shifts" in the dominant failure mechanisms. There are many ways in which this could be done. E.g., the test specimens could be mechanically pre-stressed, so that the temperature cycling could be carried out in a narrower range of temperatures [45]. But a better way seems to be the replacement of temperature cycling with a more cost-effective, less time-consuming and, most importantly, more physically meaningful accelerated test, such as the low-temperature/random-vibrations bias (see section 4.3).
It is the QT that is the major means for making a viable E&P device or package into a justifiably marketable product. While many HALT aspects are different for different manufacturers, and are often kept as proprietary information, QTs and standards are the same for a given industry and product type. Burn-in testing (BIT) is a post-manufacturing test. Mass fabrication, no matter how good the design effort and the fabrication technologies are, generates, in addition to desirable and relatively robust ("strong") products, also some undesirable and unreliable ("weak") devices ("freaks"), which, if shipped to the customer, will most likely fail in the field. BIT is supposed to detect and eliminate such "freaks". As a result, the final bathtub curve (BTC) of a product that underwent BIT is not supposed to contain the infant mortality portion (IMP). In today's practice, BIT, a destructive test for the "freaks" and a non-destructive one for the healthy devices, is often run within the framework of, and concurrently with, HALT.
Are today's practices, based on the above accelerated testing, adequate? A funny, but quite practical, definition of a sufficiently robust E&P product is that "reliability is when the customer comes back, not the product". It is well known, however, that E&P products that underwent HALT, passed the existing QTs and survived BIT often prematurely fail in the field. So, what could and should be done differently?

Failure-oriented-accelerated-testing (FOAT), its objective and role
"Say not, 'I have found the truth,' but rather, 'I have found a truth.'" Kahlil Gibran, Lebanese artist, poet and writer

One crucial shortcoming of today's E&P reliability assurance practices is that they are seldom based on a good understanding of the underlying reliability physics of the particular E&P product and, most importantly, although they claim a certain product lifetime, they do not suggest a trustworthy effort to quantify it. A possible way to go is to design and conduct FOAT aimed, first of all, at understanding and confirming the anticipated physics of failure, but also at using the FOAT data to predict the operational reliability of the product (last column in Table 1).
To do that, FOAT should be geared to an adequate, simple, easy-to-use and physically meaningful predictive model. The BAZ model (see Appendix B and section 3 below) can be employed in this capacity.
Predictive modeling has proven for many years to be a highly useful and highly time- and cost-effective means for understanding the physics of failure in reliability engineering, as well as for designing the most effective accelerated tests. It has been recently suggested that a highly focused (on the most vulnerable material and/or structural element of the design, such as, e.g., solder joint interconnections) and, to the extent possible, highly cost-effective FOAT be considered the experimental basis, the "heart", of the new fruitful, flexible and physically meaningful design-for-reliability concept, PDfR (see the next section for details). FOAT should be conducted in addition to, and, in some cases, even instead of, HALT, especially for new products, whose operational reliability is, as a rule, unclear and for which no experience has been accumulated.

If, e.g., the elastic constants of the solder glass are E_g = 0.66 × 10^6 kg/cm^2 and ν_g = 0.27, the sealing (fabrication) temperature is 485 °C, and the lowest (testing) temperature is -65 °C (so that ∆t = 550 °C), the effective thermal stress in the seal glass, expressed through the error (Laplace) function, can be computed accordingly.

PPM approach and PDfR concept, their roles and applications

"A pinch of probability is worth a pound of perhaps." James G. Thurber, American writer and cartoonist

Design for reliability (DfR) is, as is known, a set of approaches, methods and best practices that are supposed to be used at the design stage of a product to minimize the risk that the fabricated product might not meet the reliability objectives and customer expectations. When the deterministic approach is used, the safety factor (SF) is defined as the ratio of the capacity ("strength") C of the product to the demand ("stress") D. 
When the PDfR approach is considered, the SF can be introduced as a probabilistic ratio that accounts for the scatter in both the capacity and the demand.

Reliable seal glass bond in a ceramic package design: AT&T ceramic packages fabricated at its Allentown (former "Western Electric") facility in the mid-nineties experienced numerous failures during accelerated tests. It has been determined that this happened because the seal/solder glass that bonded the two ceramic parts had a higher coefficient of thermal expansion (CTE) than the ceramic lid and substrate, and therefore, when the packages were cooled down from the high manufacturing temperature of about 800-900 °C to room temperature, all the packages cracked. To design a reliable seal, we had not only to replace the existing seal glass with a glass material that would have a lower CTE than the ceramics, but, in addition, we had to make sure that the interfacial shearing stresses at the ceramics/glass interfaces, subjected to compression at low temperatures, would be low enough not to crack the seal glass material. Treating the CTEs of the brittle ceramic and brittle glass materials as normally distributed random variables, the following PDfR methodology was developed and applied. No failures were observed in the packages designed and manufactured based on the developed methodology. Here is how a reliable seal glass material was selected in a ceramic IC package using this PDfR approach [46].
The maximum interfacial shearing stress in a thin solder glass layer in a ceramic package design can be computed from a formula that involves the parameter of the interfacial shearing stress and the axial compliance of the assembly; the second term in this formula is small compared to the first one and can be omitted. As evident from the extreme-value analysis (see "Application of extreme value distribution" below), the ratio of the extreme response y_n, after n cycles are applied, to the maximum response x_D, when a single cycle is applied, is √(2 ln n). This ratio is 3.2552 for 200 cycles, 3.7169 for 1000 cycles, and 4.1273 for 5000 cycles.
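The quoted ratios are easy to verify with a one-line evaluation of √(2 ln n) for the three cycle counts mentioned:

```python
import math

def extreme_to_single_ratio(n: int) -> float:
    # Asymptotic ratio y_n / x_D of the extreme response after n cycles
    # to the maximum single-cycle response.
    return math.sqrt(2.0 * math.log(n))

for n in (200, 1000, 5000):
    print(n, round(extreme_to_single_ratio(n), 4))
```

The printed values reproduce the 3.2552, 3.7169 and 4.1273 figures given in the text.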
Adequate heat sink: Consider a heat sink whose steady-state operation is determined by the Arrhenius equation (B-2) [28] (Appendix A). The probability of non-failure can be found using the exponential law of reliability as P = exp[-(t/τ_0) exp(-U/(kT))]. Solving this equation for the absolute temperature T, we have: T = (U/k)/ln[t/(τ_0 ln(1/P))]. Addressing, e.g., a failure caused by surface charge accumulation, for which the ratio U/k of the activation energy to Boltzmann's constant is 11,600 K, assuming that the FOAT-predicted time factor is τ_0 = 2 × 10^-5 h, and that the customer requires that the probability of failure at the end of the service time of t = 40,000 h be, say, Q = 10^-5 (so that ln(1/P) ≈ Q), the obtained formula for the required temperature yields: T = 352.3 K = 79.3 °C. Thus, the heat sink should be designed accordingly, and the product manufacturer should require that the vendor manufacture and deliver such a heat sink. The situation changes for the worse if the temperature of the device changes, especially in a random fashion (see the previous example 4.3.2), but this situation can also be predicted by a simple probabilistic analysis, which is, however, beyond the scope of this article.
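The heat-sink calculation above can be reproduced in a few lines; the formula T = (U/k)/ln[t/(τ_0 ln(1/P))] is the one obtained in the text, with ln(1/P) evaluated exactly rather than through the small-Q approximation.

```python
import math

def required_temperature_k(u_over_k: float, tau0_h: float,
                           t_service_h: float, q_fail: float) -> float:
    """Highest admissible steady-state absolute temperature for which the
    probability of failure over t_service_h does not exceed q_fail."""
    # P = exp[-(t/tau0) exp(-U/(kT))]  =>  T = (U/k) / ln[t / (tau0 * ln(1/P))]
    ln_inv_p = -math.log(1.0 - q_fail)      # ln(1/P), ~= q_fail for small q
    return u_over_k / math.log(t_service_h / (tau0_h * ln_inv_p))

t_k = required_temperature_k(11600.0, 2.0e-5, 40000.0, 1.0e-5)
print(t_k, t_k - 273.15)    # ~352.3 K, i.e., ~79 deg C
```

This confirms the ~79 °C design target quoted in the example.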

Kinetic Multi-Parametric BAZ Equation as the "Heart" of the PDfR Concept
"Everyone knows that we live in the era of engineering, however, he rarely realizes that literally all our engineering is based on mathematics and physics" -Bartel Leendert van der Waerden, Dutch mathematician

Electronic package subjected to the combined action of two stressors
The rationale behind the BAZ equation is described in Appendix B. Let us consider, for the sake of simplicity, the action of just two stressors [49,50]: elevated humidity H and elevated voltage V. If the level I* of the leakage current is accepted as the suitable criterion of material/structural failure, then the equation (B-2) can be written for the probability of non-failure as P = exp[-γ_I I* t exp(-(U_0 - γ_H H - γ_V V)/(kT))].

Returning to the seal-glass example: the CTEs at this temperature are ᾱ_g = 6.75 × 10^-6 1/°C and ᾱ_c = 7.20 × 10^-6 1/°C. If the ultimate strength of the seal glass is σ_u = 5500 kg/cm^2, then, with an acceptable SF of, say, 4, we have σ* = σ_u/4 = 1375 kg/cm^2. The allowable level of the CTE parameter ψ = α_c - α_g, and hence the corresponding probability of non-failure of the seal glass material, follow from these data. Note that if the standard deviations of the materials' CTEs were smaller, then the SFs γ and γ* and the probability P of non-failure would be significantly higher: γ = 3.1825, γ* = 19.5556 and P = 0.999.
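The closing numbers of this example can be checked under the standard PDfR assumption that, for a normally distributed safety margin, the probability of non-failure equals the standard normal CDF of the safety factor; with γ = 3.1825 this indeed gives P ≈ 0.999.

```python
import math

def normal_cdf(x: float) -> float:
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

gamma_sf = 3.1825          # safety factor quoted in the text
p_nf = normal_cdf(gamma_sf)
print(round(p_nf, 3))      # probability of non-failure
```

The check also illustrates why a modest increase of the safety factor buys a sharp drop in the probability of failure: the normal tail decays super-exponentially.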

Application of extreme value distribution:
An E&P device is operated in temperature cycling conditions. Let us assume that the random amplitude of the induced stress, when a single cycle is applied, is distributed in accordance with the Rayleigh law, so that the probability density function of the random amplitude of the induced thermal stress is f(x) = (x/D_x) exp[-x^2/(2D_x)]. Here D_x is the variance of the distribution. Let us assess the most likely extreme value of the stress amplitude for a large number n of cycles.
The probability distribution density function g(y_n) and the probability distribution function G(y_n) for the extreme value Y_n of the stress amplitude are expressed as follows [28,47]: G(y_n) = F^n(y_n) and g(y_n) = n f(y_n) F^(n-1)(y_n), respectively, where f(x) and F(x) are the single-cycle probability density and distribution functions. Introducing the expression for the function G(y_n) into the expression for the function g(y_n), a formula can be obtained for the probability density distribution function g(y_n) in terms of the ratio ς_n = y_n/√D_x of the sought amplitude, after the loading is applied n times, to the standard deviation of the random response in question. The condition g'(y_n) = 0 results in an equation for ς_n; if the number n is large, the second term in this equation is small compared to the first one and can be omitted, so that ς_n ≈ √(2 ln n).

The above two-stressor BAZ equation contains four unknowns: the stress-free activation energy U_0, the leakage current sensitivity factor γ_I, the relative humidity sensitivity factor γ_H and the elevated voltage sensitivity factor γ_V. These unknowns can be determined experimentally, by conducting a three-step FOAT.
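The √(2 ln n) result can also be checked by brute force: the maxima of n Rayleigh-distributed amplitudes cluster near √(2 ln n) standard deviations (the simulated mean sits slightly above the asymptotic most-likely value, as expected for a Gumbel-type limit). The sampler and the sample sizes below are illustrative choices, not taken from the source.

```python
import math
import random

def rayleigh_sample(sigma: float = 1.0) -> float:
    # Inverse-CDF sampling: F(x) = 1 - exp(-x^2 / (2 sigma^2))
    return sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))

random.seed(42)
n_cycles = 1000     # cycles per "mission"
trials = 500        # number of simulated missions
maxima = [max(rayleigh_sample() for _ in range(n_cycles))
          for _ in range(trials)]
mean_max = sum(maxima) / trials

print(mean_max)                              # simulated mean extreme
print(math.sqrt(2.0 * math.log(n_cycles)))   # asymptotic value, ~3.717
```

The small positive offset of the simulated mean over √(2 ln n) is the Euler-Mascheroni correction of the limiting Gumbel distribution.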
At the first step, one should conduct the test at two temperatures, T_1 and T_2, while keeping the levels of the relative humidity H and the elevated voltage V unchanged. Assuming a certain level I* of the monitored/measured leakage current as the physically meaningful criterion of failure, recording during the FOAT the percentages P_1 and P_2 of non-failed samples, and using the above equation for the probability of non-failure, we obtain two equations for the probabilities of non-failure, where t_1 and t_2 are the testing times and T_1 and T_2 are the temperatures at which the failures were observed. Since the numerators U_0 - γ_H H - γ_V V in the exponents of these equations are the same, the following transcendental equation must be fulfilled: T_1 ln[γ_I I* t_1/(-ln P_1)] - T_2 ln[γ_I I* t_2/(-ln P_2)] = 0. This equation enables determining the leakage current sensitivity factor γ_I. At the second step, testing at two humidity levels, H_1 and H_2, should be conducted at the same temperature and voltage. This enables one to determine the relative humidity sensitivity factor γ_H. Similarly, the voltage sensitivity factor γ_V can be determined when testing is conducted, at the third step, at two voltage levels, V_1 and V_2. 
The stress-free activation energy U_0 can then be evaluated from the above expression for the probability P of non-failure, for any consistent combination of the relative humidity, voltage, temperature and time, as U_0 = γ_H H + γ_V V + kT ln[γ_I I* t/(-ln P)]. If, e.g., after t_1 = 35 h of accelerated testing at the temperature T_1 = 60 °C = 333 K, the voltage V = 600 V and the relative humidity H = 0.85, 10% of the specimens reached the critical level I* = 3.5 μA of the leakage current and, hence, failed, then the corresponding probability of non-failure is P_1 = 0.9; and if after t_2 = 70 h of testing at the temperature T_2 = 85 °C = 358 K, at the same relative humidity and voltage levels, 20% of the tested samples failed, so that the probability of non-failure is P_2 = 0.8, then the factor γ_I can be found from the above transcendental equation. Its solution is γ_I = 4926 h^-1 (μA)^-1, so that γ_I I* = 17,241 h^-1. Tests at the second step are conducted at two relative humidity levels, H_1 and H_2, while keeping the temperature and the voltage unchanged. Then the factor γ_H can be found as γ_H = [kT/(H_2 - H_1)] ln[t_1(-ln P_2)/(t_2(-ln P_1))]. If, e.g., 5% of the tested specimens failed after t_1 = 40 h of testing at the relative humidity H_1 = 0.5, at the voltage V = 600 V and at the temperature T = 60 °C = 333 K (P_1 = 0.95), and 10% of the specimens failed (P_2 = 0.9) after t_2 = 55 h of testing at this temperature, but at the relative humidity H_2 = 0.85, then the above expression yields γ_H = 0.03292 eV. At the third step, when testing at two voltage levels, V_1 = 600 V and V_2 = 1000 V, is carried out for the same temperature-humidity bias at T = 85 °C = 358 K and H = 0.85, and 10% of the specimens failed after t_1 = 40 h (P_1 = 0.9), and 20% of the specimens failed after t_2 = 80 h of testing (P_2 = 0.8), then the factor γ_V for the applied voltage and, subsequently, the predicted stress-free activation energy U_0 can be found. No wonder that the third term in the expression for U_0 plays the dominant role. 
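The second-step arithmetic can be reproduced directly; the value γ_I I* = 17,241 h⁻¹ is taken from the text, and the formula for γ_H is the one written above.

```python
import math

K_EV = 8.617e-5                       # Boltzmann constant, eV/K
T = 333.0                             # 60 C, step-two temperature, K

# Step two: two humidity levels at fixed temperature and voltage.
t1, p1, h1 = 40.0, 0.95, 0.50
t2, p2, h2 = 55.0, 0.90, 0.85
gamma_h = (K_EV * T / (h2 - h1)) * math.log(
    (t1 * -math.log(p2)) / (t2 * -math.log(p1)))
print(gamma_h)                        # ~0.0329 eV, as in the text

# Dominant (third) term of U0 = gH*H + gV*V + kT*ln(gI*I* t / (-ln P)),
# evaluated at the step-one conditions with gI*I* = 17241 1/h from the text.
gii = 17241.0
third_term = K_EV * T * math.log(gii * 35.0 / (-math.log(0.9)))
print(third_term)                     # ~0.45 eV, hence dominant
```

The ~0.45 eV size of the kT ln(...) term, against hundredths of an eV for the humidity term, is what makes it dominant in U_0.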
It is noteworthy, however, that external loading may also have an effect on the "stress-free" activation energy. The author intends to investigate such a possibility in future work.
Here U_0 is the activation energy, which characterizes the propensity of the solder material to fracture; W is the damage caused in the solder material by a single temperature cycle, measured, in accordance with Hall's concept [50-53], by the hysteresis loop area of a single temperature cycle for the strain of interest; T is the absolute temperature (say, the mean temperature of the cycle); n is the number of cycles; k is Boltzmann's constant; t is time; R, Ω, is the measured (monitored) electrical resistance at the joint location; and γ is the sensitivity factor for the measured electrical resistance.
The above equation for the probability of non-failure makes physical sense. Indeed, this probability is "one" at the initial moment of time, when the electrical resistance of the solder joint structure is next to zero. This probability decreases with time because of material aging and structural degradation, and not necessarily only because of temperature cycling leading to crack initiation and propagation. The probability of non-failure is lower for a higher electrical resistance (a resistance as high as, say, 450 Ω can be viewed as an indication of an irreversible mechanical failure of the joint). Materials with a higher activation energy U_0 are characterized by higher fracture toughness and have a higher probability of non-failure. The increase in the number n of cycles leads to a lower effective energy U = U_0 - nW, and so does an increase in the energy W of a single cycle (Figure 1).
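The physical-sense checks in this paragraph can be encoded directly, assuming the BAZ form P = exp[-γRt exp(-(U_0 - nW)/(kT))] discussed here; all parameter values below are illustrative stand-ins, not taken from the source.

```python
import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def p_no_fail(t_h, r_ohm, n_cycles, *, u0=0.5, w=1.0e-4,
              temp_k=350.0, gamma=1.0e-4):
    """BAZ-type probability of non-failure of a solder joint:
    P = exp(-gamma * R * t * exp(-(U0 - n*W)/(k*T))).
    u0 [eV], w [eV/cycle], gamma [1/(Ohm*h)] are illustrative values."""
    u_eff = u0 - n_cycles * w          # effective activation energy
    return math.exp(-gamma * r_ohm * t_h * math.exp(-u_eff / (K_EV * temp_k)))

# Sanity checks mirroring the qualitative statements in the text:
print(p_no_fail(0.0, 0.0, 0))          # exactly 1 at t = 0 and R ~ 0
print(p_no_fail(1e4, 450.0, 1000))     # lower for high R and many cycles
```

With these inputs the probability starts at one and decreases monotonically with time, resistance and cycle count, exactly the behavior argued above.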
It could be shown (see Appendix B) that the maximum entropy of the above probability distribution takes place at the MTTF; mechanical failure because of temperature cycling takes place when the accumulated damage nW becomes as high as the stress-free activation energy U_0. The activation energy U_0 in the above numerical example (with rather tentative, but still realistic, input data) is about U_0 = 0.5 eV. This result is consistent with the existing reference information. This information (Bell Labs data) indicates that for failure mechanisms typical for semiconductor devices the stress-free activation energy ranges from 0.3 eV to 0.6 eV; for metallization defects and electro-migration in Al it is about 0.5 eV; for charge loss it is on the order of 0.6 eV; for Si junction defects it is 0.8 eV. Other known activation energy values used in E&P reliability engineering assessments are more or less of the same order of magnitude (see also http://nomtbf.com/2012/08/where-does-0-7ev-come-from).
With the above information, the corresponding expression for the probability of non-failure can be obtained. If, e.g., t = 10 h, H = 0.20, V = 220 V, and the operation temperature is T = 70 °C = 343 K, then the probability of non-failure at these conditions can be evaluated from this expression. Clearly, the TTF is not an independent characteristic of the lifetime of a product, but depends on the predicted or specified probability of its non-failure. If this probability is high, the lifetime of the product is short; and vice versa, if the specified probability of non-failure is low, the corresponding lifetime is long.

Predicted lifetime of SJIs: Application of Hall's concept
Using the BAZ model (see Appendix B), the probability of non-failure of the SJI experiencing inelastic strains during temperature cycling [48][49][50][51][52][53] can be sought as P = exp[-γRt exp(-(U0 - nW)/(kT))]. This approach is viewed as an alternative to accelerated testing based on temperature cycling, which is, as is well known, a costly, time- and labor-consuming, and often even misleading accelerated-testing approach. This is because the temperature range in accelerated temperature cycling has to be substantially wider than what the material will most likely encounter in actual use conditions, and the properties of E&P materials are, as is known, temperature sensitive.
As long as inelastic deformations take place, it is assumed that it is these deformations (which typically occur at the peripheral portions of the soldered assembly, where the interfacial stresses are the highest) that determine the fatigue lifetime of the solder material, and therefore the state of stress in the elastic mid-portion of the assembly does not have to be accounted for. The roles of the size and stiffness of this mid-portion have to be considered, however, when determining the very existence, and establishing the size, of the inelastic zones at the peripheral portions of the soldered assemblies. Although the detailed numerical example has been carried out for a ball-grid-array (BGA) design, it is applicable also to the column-grid-array (CGA) and quad-flat-no-lead (QFN) designs that are highly popular today, as well as, actually, to any packaging design. It is noteworthy in this connection that it is much easier to avoid inelastic strains in CGA and QFN structures than in the actually tested BGA design.
The random vibrations were considered in the developed methodology as a white noise of the given ratio of the squared acceleration amplitudes to the vibration frequency. Testing was carried out for two PCBs, with surface-mounted packages on them, at the same level (with a mean value of 50 g) of three-dimensional random vibrations. One board was subjected to the low temperature of -20 °C and the other one to -100 °C. It has been found by preliminary calculations that the solder joints at -20 °C would still perform within the elastic range, while the solder joints at -100 °C would experience inelastic strains. No failures were detected in the joints of the board tested at -20 °C, while the joints of the board tested at -100 °C failed after several hours of testing.
Predicted "static fatigue" lifetime of an optical silica fiber BAZ equation can be effectively employed as an attractive replacement of the widely used today purely empirical power law relationship for assessing the "static fatigue" (delayed fracture) lifetime of optical silica fibers [41]. The literature dedicated to delayed fracture of ceramic and silica materials, mostly experimental, is enormous. In the analysis below the combined action of tensile loading and an elevated temperature is considered.
Let, e.g., the following input information be obtained at the first FOAT step for a polyimide-coated fiber intended for elevated-temperature operations: 1) After t1 = 10 h of testing at the temperature of T1 = 300 °C = 573 K, under the stress of σ = 420 kg/mm², 10% of the tested specimens failed, so that the probability of non-failure is P1 = 0.9; 2) After t2 = 8.0 h of testing at the temperature of T2 = 350 °C = 623 K under the same stress, 25% of the tested samples failed, so that the probability of non-failure is P2 = 0.75.

Returning to the solder-joint example: when failure occurs, the effective activation energy U = U0 - nW vanishes, so that the temperature in the denominator in the parentheses in the equation for the MTTF τ becomes irrelevant. In this case the measured probability of non-failure for the situation when failure takes place is P_f = exp(-t_f/τ), where τ = 1/(γR_f) is the MTTF. If, e.g., 20 specimens were temperature cycled and the high resistance R_f = 450 Ω, considered as an indication of the material's failure, was detected in 75% of them, then the probability of non-failure is P_f = 0.25. If the number of cycles during such a FOAT was, e.g., n_f = 2000, and each cycle lasted, say, for 20 min = 1200 sec, then the detected time-to-failure is t_f = 2000 × 1200 = 24 × 10⁵ sec, and the sensitivity factor for the electrical resistance is γ = -ln P_f/(R_f t_f) ≈ 1.28 × 10⁻⁹ Ω⁻¹sec⁻¹. According to Hall's concept [51][52][53][54], the energy of a single cycle should be evaluated by running a special test, in which appropriate strain gages are used. Let, e.g., in these tests the area of the hysteresis loop of a single cycle be W = 2.5 × 10⁻⁴ eV. Then the stress-free activation energy of the solder material is U0 = n_f W = 2000 × 2.5 × 10⁻⁴ = 0.5 eV. In order to assess the number of cycles to failure in actual operation conditions one could assume that the temperature range in these conditions is, say, half the accelerated test range, and that the area W of the hysteresis loop is proportional to the temperature range.
Then the number of cycles to failure is n = U0/W = 0.5/(1.25 × 10⁻⁴) = 4000.
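The arithmetic of this numerical example can be sketched as follows. The sketch assumes, as the text implies, that at failure the effective activation energy U0 - n_f·W vanishes, so that P_f = exp(-γ R_f t_f); the variable names are ours.

```python
import math

# FOAT observations (from the numerical example in the text)
p_f = 0.25        # probability of non-failure at the detected failures
r_f = 450.0       # electrical resistance at failure, Ohm
n_f = 2000        # number of cycles to failure in the FOAT
cycle_s = 1200.0  # duration of one temperature cycle, s
w_test = 2.5e-4   # hysteresis-loop energy of a single test cycle, eV

t_f = n_f * cycle_s                   # time to failure, s
gamma = -math.log(p_f) / (r_f * t_f)  # resistance sensitivity factor, 1/(Ohm*s)
u0 = n_f * w_test                     # stress-free activation energy, eV

# In the field, the temperature range (and hence W) is assumed halved:
w_field = w_test / 2.0
n_field = u0 / w_field                # predicted cycles to failure in service
```

With the text's inputs this yields U0 = 0.5 eV, γ ≈ 1.28 × 10⁻⁹ (Ω·s)⁻¹, and 4000 field cycles to failure.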

Accelerated testing based on temperature cycling should be replaced
It is well known that it is the combination of low temperatures and repetitive dynamic loading that dramatically accelerates the propagation of fatigue cracks, whether elastic or inelastic. A modification of the BAZ model is suggested [48,49] for the evaluation of the lifetime of SJIs experiencing inelastic strains. The experimental basis of the approach is FOAT. The test specimens were subjected to the combined action of low temperatures (not elevated temperatures, as in the classical Arrhenius model) and random vibrations with the given input energy spectrum of the "white noise" type. The methodology suggested and employed in [48,49] is viewed as a possible, effective and attractive alternative to accelerated testing based on temperature cycling.

Returning to the optical fiber: forming the BAZ-based equations for the probability of non-failure at the two test conditions and introducing appropriate notations, an expression can be obtained for the time sensitivity factor γt; with the above input data, the values of γt and of the effective activation energy follow. At the second step, testing has been conducted at the stresses of σ1 = 420 kg/mm² and σ2 = 320 kg/mm² at T = 350 °C = 623 K, and it has been confirmed that 10% of the tested samples under the stress of σ1 = 420 kg/mm² failed after t1 = 10.0 h of testing, so that P1 = 0.9. The percentage of failed samples tested at the stress level of σ2 = 320 kg/mm² was 5% after t2 = 24 h of testing, so that P2 = 0.95. From these data the ratio γσ/(kT) of the stress sensitivity factor γσ to the thermal energy kT can be found. After the sensitivity factors γt and γσ for the time and for the stress are determined, the expression for the ratio U0/(kT) of the stress-free activation energy to the thermal energy can be found from the BAZ formula for the probability of non-failure.
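The two-step FOAT just described can be sketched as follows. This is a sketch only: the BAZ form P = exp[-γt·t·exp(-(U0 - γσσ)/(kT))] is reconstructed from the surrounding discussion, the function names are ours, and, since the step-1 and step-2 data sets quoted in the text are not fully mutually consistent, only the step-1 points are reproduced exactly.

```python
import math

K_EV = 8.617e-5  # Boltzmann's constant, eV/K

def p_baz(t, temp, sigma, gamma_t, gamma_s, u0):
    """Assumed BAZ form: P = exp(-gamma_t*t*exp(-(U0 - gamma_s*sigma)/(kT)))."""
    return math.exp(-gamma_t * t *
                    math.exp(-(u0 - gamma_s * sigma) / (K_EV * temp)))

def step1(t1, temp1, p1, t2, temp2, p2):
    """Same stress, two temperatures -> effective energy U = U0 - gamma_s*sigma
    and the time sensitivity factor gamma_t."""
    a1 = math.log(-math.log(p1) / t1)
    a2 = math.log(-math.log(p2) / t2)
    u_eff = -K_EV * (a1 - a2) / (1.0 / temp1 - 1.0 / temp2)
    gamma_t = (-math.log(p1) / t1) * math.exp(u_eff / (K_EV * temp1))
    return u_eff, gamma_t

def step2(t1, p1, s1, t2, p2, s2, temp):
    """Same temperature, two stresses -> stress sensitivity factor gamma_s."""
    a1 = math.log(-math.log(p1) / t1)
    a2 = math.log(-math.log(p2) / t2)
    return K_EV * temp * (a1 - a2) / (s1 - s2)

# FOAT data for the polyimide-coated fiber from the text
u_eff, gamma_t = step1(10.0, 573.0, 0.90, 8.0, 623.0, 0.75)  # sigma = 420 kg/mm^2
gamma_s = step2(10.0, 0.90, 420.0, 24.0, 0.95, 320.0, 623.0)
u0 = u_eff + gamma_s * 420.0  # stress-free activation energy, eV
```

Plugging the derived parameters back into `p_baz` recovers the step-1 probabilities, which is a convenient sanity check on the algebra.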
If, e.g., the stress σ = σ2 = 320 kg/mm² is applied for t = 24 h and the acceptable probability of non-failure is, say, P = 0.99, the corresponding activation energy can be evaluated. This result indicates that the activation energy U0 is determined primarily, as has been expected, by the properties of the silica material (second term), but is affected also, to a lesser extent, by the level of the applied stress. The fatigue lifetime, i.e., the TTF, can then be determined for the acceptable (specified) probability of non-failure. This relationship indicates that when the specified probability of non-failure is low, the expected lifetime (RUL) could be significant. If, e.g., the applied temperature and stress are T = 325 °C = 598 K and σ = 5.0 kg/mm², and the acceptable (specified) probability of non-failure is P = 0.8, then the predicted TTF is long. If, however, the acceptable probability of non-failure is considerably higher, say, P = 0.99, then the fiber's lifetime is much shorter.

BIT of E&P Products: To BIT or Not to BIT, That's the Question

"We see that the theory of probability is at heart only common sense reduced to calculations: it makes us appreciate with exactitude what reasonable minds feel by a sort of instinct, often without being able to account for it." Pierre-Simon, Marquis de Laplace, French mathematician and astronomer

BIT [54][55][56][57][58] is an accepted practice in E&P manufacturing for detecting and eliminating early failures ("freaks") in newly fabricated electronic products prior to shipping the "healthy" ones that survived BIT to the customer(s). BIT can be based on temperature cycling, elevated temperatures, voltage, current, humidity, random vibrations, etc., and/or, since the principle of superposition does not work in reliability engineering, on the appropriate combination of these stressors. BIT is a costly undertaking: early failures are avoided and the infant mortality portion (IMP) of the bathtub curve (BTC) is supposedly eliminated at the expense of a reduced yield.
Even worse, the elevated BIT stressors might not only eliminate "freaks," but could cause permanent damage to the main population of the "healthy" products. This kind of testing should therefore be well understood, thoroughly planned and carefully executed. It is unclear, however, whether BIT is always needed ("to BIT or not to BIT: that's the question"), or to what extent the current practices are adequate and effective.
HALT, which is currently employed as a BIT vehicle, is, as has been indicated above, a "black box" that tries "to kill many birds with one stone." HALT is therefore unable to provide any trustworthy information on what this testing actually does. It remains unclear what is happening during, and as a result of, HALT-based BIT, and how to effectively eliminate "freaks" while minimizing the testing time, reducing the BIT cost and avoiding damage to the sound devices. When HALT is relied upon to do the BIT job, it is not even easy to determine whether there exists a failure rate that decreases with time. There is, therefore, an obvious incentive to develop ways in which the BIT process could be better understood, trustworthily quantified, effectively monitored and possibly even optimized.
Accordingly, in this section some important BIT aspects are addressed for a packaged E&P product comprised of numerous mass-produced components. We intend to shed some quantitative light on the BIT process, and, since nothing is perfect (as has been indicated, the difference between a highly reliable process or product and an insufficiently reliable one is "merely" in the levels of their never-zero probability of failure), such a quantification should be done on the probabilistic basis. In particular, we intend to come up with a suitable criterion to answer the fundamental "to BIT or not to BIT" question and, if BIT is decided upon, to find a way to quantify its outcome using our physically meaningful and flexible BAZ model.
In the analysis below, the role and significance of the following factors that affect the testing time and the stress level are addressed: the random statistical failure rate (SFR) of the mass-produced components that the product of interest is comprised of; the way to assess, from the highly focused and highly cost-effective FOAT, the activation energy of the "freak" population of the manufacturing technology of interest; the role of the applied stressor(s); and, most importantly, the probabilities of the "freak" failures depending on the duration and level of the BIT loading, assessed using the BAZ equation, as well as the variance of the random SFR of the constituent components. It is shown that the BTC-based time derivative of the failure rate at the initial moment of time (at the beginning of the IMP of the BTC) can be considered a suitable criterion of whether BIT for a packaged IC device should, or does not have to, be conducted. It is shown also that this criterion is, in effect, the variance of the random SFR of the mass-produced components that the manufacturer of the given product received from numerous vendors, whose commitments to the reliability of their mass-produced components are unknown, so that the random SFR of these components might vary significantly, from zero to infinity. Based on the developed general formula for the non-random SFR of a product comprised of such components, the solution for the case of a normally distributed random SFR of the constituent components has been obtained. This information enables answering the "to BIT or not to BIT" question in electronics manufacturing. If BIT is decided upon, the BAZ model can be employed for the assessment of its required duration and level.
Our analyses have to do with the role and significance of the important factors that affect the testing time and stress level: the random SFR of the mass-produced components that the product of interest is comprised of; the way to assess, from the highly focused and highly cost-effective FOAT, the activation energy of the "freak" population; the role of the applied stressor(s); and, most importantly, the probabilities of the "freak" failures depending on the duration of the BIT effort. These factors should be considered when there is an intent to quantify and, eventually, optimize the BIT procedure. The fundamental question is addressed using two mutually complementary and independent analyses: 1) the analysis of the configuration of the IMP of a BTC obtained for a more or less well-established manufacturing technology of interest; and 2) the analysis of the role of the random SFR of the mass-produced components that the product of interest is comprised of.
The "time function" φ[τ(t)] depends on the dimensionless time and on the ratio of the mean value of the random SFR to its standard deviation; this ratio, known in the probabilistic reliability theory as the safety factor, can be interpreted as a measure of the degree of uncertainty of the random SFR. The time derivative λ′ST(t) of the non-random SFR with respect to the actual (real) time can then be evaluated. It can be shown that the derivative φ′(τ) at the initial moment of time (t = 0) is equal to -1.0, so that the derivative λ′ST(0) is, in effect, the variance (with a "minus" sign, of course) of the random SFR of the constituent components: this is the physical meaning of this derivative.
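The claim that the initial time derivative of the product's non-random SFR equals minus the variance of the components' random SFR can be verified numerically. The sketch below uses a two-point "healthy/freak" mixture of our own choosing; the product-level probability of non-failure is P(t) = E[exp(-λt)] and the non-random SFR is λST(t) = -d ln P/dt.

```python
import math

# A mixed population: "healthy" components with a low SFR and a small
# fraction of "freaks" with a high SFR (illustrative numbers only).
rates = [0.10, 2.00]  # failure rates, 1/h
probs = [0.95, 0.05]  # fractions of the population

mean = sum(p * r for p, r in zip(probs, rates))
var = sum(p * r * r for p, r in zip(probs, rates)) - mean ** 2

def log_p(t):
    """ln of the product-level probability of non-failure, P(t) = E[exp(-lambda*t)]."""
    return math.log(sum(p * math.exp(-r * t) for p, r in zip(probs, rates)))

h = 1e-3
# Non-random SFR at t = 0: lambda_ST(0) = -d ln P / dt |_0 (central difference)
lam_st_0 = -(log_p(h) - log_p(-h)) / (2 * h)
# Its time derivative at t = 0: -d^2 ln P / dt^2 |_0 (central second difference)
d_lambda_0 = -(log_p(h) - 2 * log_p(0.0) + log_p(-h)) / h ** 2
```

The finite-difference estimates confirm that λST(0) is the mean of the random SFR and λ′ST(0) is its variance taken with a minus sign.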
As to the use of the kinetic BAZ model in the problem in question, it suggests a simple, easy-to-use, highly flexible and physically meaningful way to evaluate the probability of failure of a material or a device after a given time in testing or operation at a given temperature and under a given stress or stressors. Using this model, the probability of non-failure during the BIT can be sought as P = exp[-γtDIt exp(-(U0 - γσσ)/(kT))]. Here D is the variance of the random SFR of the mass-produced components; I is the measured/monitored signal (such as, e.g., the leakage current, whose agreed-upon high value I* is considered an indication of failure, or an elevated electrical resistance, particularly suitable for solder joint interconnections); t is time; σ is the "external" stressor; U0 is the activation energy (unlike in the original BAZ model, this energy may or may not be affected by the level of the external stressor); T is the absolute temperature; γσ is the sensitivity factor for the applied stress; and γt is the time/variance sensitivity factor. The above distribution makes physical sense. Indeed, the probability P of non-failure decreases with an increase in the variance D, in the time t, in the level I* of the leakage current at failure and in the temperature T, and increases with an increase in the activation energy U0. As has been shown, the maxima of the entropy and of the probability of non-failure take place at the moment of time accepted in the BAZ model as the MTTF. There are three unknowns in this expression: the product ρ = γtD of the time sensitivity factor γt and the variance D, the stress sensitivity factor γσ, and the activation energy U0.

The desirable steady-state portion of the BTC commences at the BIT's end as a result of the interaction of two major irreversible time-dependent processes: the "favorable" statistical process that results in a decreasing failure rate with time, and the "unfavorable" physics-of-failure-related process resulting in an increasing failure rate.
The first process dominates at the IMP of the BTC and is considered here. The IMP of a typical BTC, the "reliability passport" of a mass-produced electronic product manufactured using a more or less well-established technology, can be approximated by a power-law-type relationship, in which λ0 is the BTC's steady-state ordinate, λ1 is its initial (highest) value at the beginning of the IMP, t1 is the IMP duration, and the exponent n1 is determined by the fullness β1 of the BTC's IMP. This fullness is defined as the ratio of the area between the IMP curve and the steady-state level to the area (λ1 - λ0)t1 of the corresponding rectangle. The exponent n1 changes from zero to one when β1 changes from zero to 0.5. The time derivative of the failure rate at the IMP's initial moment of time (t = 0) can then be evaluated. If this derivative is zero or next-to-zero, this means that the IMP of the BTC is parallel to the time axis (so that there is, in effect, no IMP at all), that no BIT is needed to eliminate this portion, and "not to burn-in" is the answer to the basic question: the initial value λ1 of the BTC is not different from its steady-state value λ0. What is less obvious is that the same result takes place for β1 → 0. This means that although BIT is needed, the testing could be short and low-level, because there are not too many "freaks" in the manufactured population and because, although these "freaks" exist, they are characterized by very low probabilities of non-failure, so that the planned BIT process could be a next-to-instantaneous one. The maximum value of the fullness is β1 = 0.5. This corresponds to the case when the IMP of the BTC is a straight line connecting the initial, λ1, and the steady-state, λ0, BTC ordinates; in this case λ′(0) = -(λ1 - λ0)/t1, and this seems to be the case when BIT is needed most.
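The straight-line case can be checked numerically. This sketch rests on our reading of the definition above: the fullness β1 is taken as the ratio of the area between the IMP curve and the steady-state level λ0 to the rectangle (λ1 - λ0)t1; all numbers are illustrative.

```python
# Numerical check of the "fullness" of the infant-mortality portion (IMP)
# of a bathtub curve (BTC) for the straight-line case.

lam0, lam1, t1 = 1.0, 5.0, 100.0  # illustrative steady-state/initial rates and IMP duration

def fullness(curve, steps=100000):
    """beta_1: area between the IMP curve and lam0, over the rectangle (lam1-lam0)*t1."""
    dt = t1 / steps
    area = sum((curve(i * dt) - lam0) * dt for i in range(steps))
    return area / ((lam1 - lam0) * t1)

def line(t):
    """Straight-line IMP connecting lam1 at t = 0 to lam0 at t = t1."""
    return lam1 + (lam0 - lam1) * t / t1

beta1 = fullness(line)                    # fullness of the straight-line IMP
slope0 = (line(1e-6) - line(0.0)) / 1e-6  # initial slope of the IMP
```

For the straight line the fullness comes out as 0.5 and the initial slope as -(λ1 - λ0)/t1, matching the limiting case discussed in the text.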
It has been found that the expression for the non-random time-dependent SFR can be obtained from the probability density distribution function f(λ) of the random SFR λ of the components obtained from the vendors; the case when this rate is normally distributed has been considered. The three unknowns, as has been demonstrated in previous examples, could be determined from a two-step FOAT. At the first step, testing should be carried out at two temperatures, T1 and T2, but for the same effective activation energy U = U0 - γσσ. Then the relationships for the measured probabilities of non-failure can be obtained. Here t1,2 are the corresponding times at which the failures have been detected, and I* is the agreed-upon leakage current at failure. Since the numerator U = U0 - γσσ in these relationships is kept the same in the conducted tests, the amount ρ = γtD can be found. The second step of testing is aimed at the evaluation of the stress sensitivity factor γσ and should be conducted at two stress levels, σ1 and σ2 (say, temperatures or voltages). If the stresses σ1 and σ2 are thermal stresses determined for the temperatures T1 and T2, they could be evaluated using a suitable stress model. If, however, the external stress is not a thermal stress, then the temperatures at the second-step tests should preferably be kept the same. Then the ρ value will not affect the factor γσ, which could be found directly, with T being the testing temperature. Finally, after the product ρ and the factor γσ are determined, the activation energy U0 can be found from the BAZ formula. The TTF can obviously be determined as TTF = MTTF(-lnP), where the MTTF has been defined above.
Let, e.g., the following data be obtained at the first step of the FOAT: 1) After t1 = 14 h of testing at the temperature of T1 = 60 °C = 333 K, 90% of the tested devices reached the critical level of the leakage current of I* = 3.5 μA and, hence, failed, so that the recorded probability of non-failure is P1 = 0.1; the applied stress is the elevated voltage σ1 = 380 V; 2) After t2 = 28 h of testing at the temperature of T2 = 85 °C = 358 K, 95% of the samples failed, so that the recorded probability of non-failure is P2 = 0.05; the applied stress is still the elevated voltage σ1 = 380 V. From these data the sought parameters can be computed. At the second step of the FOAT one can use, without conducting additional testing, the above information from the first step, its duration and outcome. Let the second step of testing show that after t2 = 36 h of testing at the same temperature of T = 60 °C = 333 K, 98% of the tested samples failed, so that the recorded probability of non-failure is P2 = 0.02, the applied stress σ2 being the elevated voltage σ2 = 220 V. The zero-stress activation energies calculated for these parameters and the stresses σ1 and σ2 can then be found and, to make sure that there is no calculation error, the zero-stress activation energy can be found in two independent ways. No wonder these values are considerably lower than the activation energies of "healthy" products: many manufacturers consider, as a sort of "rule of thumb," that the level of 0.7 eV can be used as an appropriate tentative number for the activation energy of healthy electronic products. In this connection it should be indicated that when the BIT process is monitored and the activation energy U0 is being continuously calculated based on the number of the failed devices, the BIT process should be terminated when the calculations, based on the observed and recorded FOAT data, indicate that the stress-free activation energy U0 starts to increase.
The MTTF can then be computed. The TTF, however, depends on the probability of non-failure; its values, calculated as TTF = MTTF × (-lnP), are shown in Table 2.
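The TTF-versus-probability relationship behind Table 2 can be sketched as follows; the MTTF value here is illustrative, not the one computed in the text.

```python
import math

def ttf(mttf, p):
    """Time to failure for a specified probability of non-failure P:
    TTF = MTTF * (-ln P), per the BAZ-based relationship in the text."""
    return mttf * (-math.log(p))

mttf = 1000.0  # hours; an illustrative value of our own choosing
table = {p: ttf(mttf, p) for p in (0.9, 0.99, 0.999)}
```

Note that the TTF equals the MTTF exactly when P = e⁻¹ ≈ 0.368 (the maximum-entropy point), and that demanding a higher probability of non-failure shortens the acceptable operation time.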
Clearly, the probabilities of non-failure for successful BITs should be low enough. It is clear also that the BIT process should be terminated when the calculated probabilities of non-failure and the activation energy U0 start rapidly increasing. Although our BIT analyses do not suggest any straightforward and complete way to optimize BIT, they nonetheless shed useful and insightful light on the significance of some important factors that affect the need for BIT and, if it is decided upon, its required time and stress level for a packaged product comprised of mass-produced components.

Adequate Trust is an Important HCF Constituent
"If a man will begin with certainties he will end with doubts; but if he will be content to begin with doubts, he shall end in certainties".
Francis Bacon, English philosopher and statesman, 'The Advancement of Learning'

From the Shakespearian "love all, trust a few" and "don't trust the person who has broken faith once" to today's Lady Gaga's "trust is like a mirror: you can fix it if it's broken, but you can still see the crack in that mother f*cker's reflection," the importance of human-human trust has been addressed by numerous writers, politicians and psychologists in connection with the role of the human factor in making a particular engineering undertaking successful and safe [59][60][61][62][63][64][65][66]. It was the 19th-century South Dakota politician and clergyman Frank Crane who seems to have been the first to point out the importance of an adequate trust in human relationships. Here are a few of his quotes: "You may be deceived if you trust too much, but you will live in torment unless you trust enough"; "We're never so vulnerable than when we trust someone - but paradoxically, if we cannot trust, neither can we find love or joy"; "Great companies that build an enduring brand have an emotional relationship with customers that has no barrier. And that emotional relationship is on the most important characteristic, which is trust". Hoff and Bashir [61] considered the role of trust in automation. Madhavan and Wiegmann [62] drew attention to the importance of trust in engineering and, particularly, to similarities and differences between human-human and human-automation trust. Rosenfeld and Kraus [63] addressed human decision making and its consequences, with consideration of the role of trust. Chatzi, Wayne, Bates and Murray [64] provided a comprehensive review of trust considerations in aviation maintenance practice.
The analysis in this section [65] is, in a way, an extension and a generalization of the recent Kaindl and Svetinovic publication [66], and addresses some important aspects of the human-in-the-loop (HITL) problem for safety-critical missions and extraordinary situations, as well as in engineering technologies. It is argued that the role and significance of trust can and should be quantified when preparing such missions. The author is convinced that otherwise the concept of an adequate trust simply cannot be effectively addressed and included into an engineering technology, design methodology or a human activity, when there is a need to assure a successful and safe outcome of a particular engineering undertaking or an aerospace or military mission. The real "miracle" of the famous Hudson event, as argued below, was that someone like Captain Sullenberger, and not a pilot with a regular HCF, turned out to be behind the wheel in such a situation.
As far as the quality of an adequate trust is concerned, Captain Sullenberger certainly "avoided over-trust" in the ability of the first officer, who was flying the aircraft when it took off from LaGuardia airport, to successfully cope with the situation when the aircraft struck a flock of Canada geese and lost engine power. Captain Sullenberger took over the controls, while the first officer began going through the emergency procedures checklist in an attempt to find information on how to restart the engines and what to do, with the help of the air traffic controllers at LaGuardia and Teterboro airports, to bring the aircraft to one of these airports and, hopefully, to land it there safely. What is even more important is that Captain Sullenberger also effectively and successfully "avoided under-trust" in his own skills, abilities and extensive experience that would enable him to successfully cope with the situation: the 57-year-old Captain Sully was a former fighter pilot, a safety expert, a professional development instructor and a glider pilot. That was the rare case when "team work" (such as, say, sharing his "wisdom" and intent with the flight controllers at LaGuardia and Teterboro) was not the right thing to pursue until the very moment of ditching. Captain Sully had trust in the aircraft structure: that it would be able to successfully withstand the slam of the water during ditching and, in addition, would allow slow enough flooding after ditching. It turned out that the crew did not activate the "ditch switch" during the incident, but Capt. Sullenberger later noted that it probably would not have been effective anyway, since the water impact tore holes in the plane's fuselage that were much larger than the openings sealed by the switch. Captain Sully had trust in the aircraft safety equipment, which was carried in excess of that mandated for the flight.
He also had trust in the outstanding cooperation and excellent cockpit resource management among the flight crew, who trusted their captain and exhibited outstanding teamwork (that is where such work was needed, useful and successful) during the landing and the rescue operation. The area where the aircraft landed was one where the fast response from, and effective help of, the various ferry operators located near the USS Intrepid ship/museum, and the ability of the rescue team to provide timely and effective help, were something that Capt. "Sully" could expect and rely upon, and he actually did. The environmental conditions and, particularly, the visibility were excellent, an important contributing factor to the survivability of the accident. All these trust-related factors played an important role in Captain Sullenberger's ability to successfully ditch the aircraft and save lives. As is known, the crew was later awarded the Master's Medal of the Guild of Air Pilots and Air Navigators for successful "emergency ditching and evacuation, with the loss of no lives… a heroic and unique aviation achievement… the most successful ditching in aviation history." National Transportation Safety Board (NTSB) Member Kitty Higgins, the principal spokesperson for the on-scene investigation, said at a press conference the day after the accident that it "has to go down [as] the most successful ditching in aviation history… These people knew what they were supposed to do and they did it and as a result, nobody lost their life." The flight crew, and, first of all, Captain Sullenberger, were widely praised for their actions. Since nobody and nothing is perfect, and the probability-of-failure is never zero, the quantification of trust should be done on the probabilistic basis. Adequate trust is an important human quality and a critical constituent of the human capacity factor (HCF) [67][68][69][70].
When evaluating the outcome of a HITL-related mission or an off-normal situation, the role of the HCF should always be considered and even quantified vs. the level of the mental workload (MWL). While the notion of the MWL is well established in aerospace and other areas of human psychology and is reasonably well understood and investigated (see, e.g., [71][72][73][74][75][76][77][78][79][80][81][82][83][84][85][86][87][88][89]), the importance of the HCF was emphasized by the author of this paper and introduced only several years ago. The rationale behind such an introduction is that it is not the absolute MWL level, but the relative levels of the MWL and HCF that determine, in addition to other critical factors, the probability of the human non-failure in a particular off-normal situation of interest. The majority of pilots with an ordinary HCF would most likely have failed in the "miracle-on-the-Hudson" situation, while "Sully," with his extraordinarily high anticipated HCF, did not.
The HCF includes, but might not be limited to, the following human qualities that enable a professional to successfully cope, when necessary, with an elevated off-normal MWL: age, fitness, health; personality type; psychological suitability for a particular task; professional experience, qualifications, and intelligence; education, both special and general; relevant capabilities and skills; level, quality and timeliness of training; performance sustainability (consistency, predictability); independent thinking and independent acting, when necessary; ability to concentrate; ability to anticipate; ability to withstand fatigue in general and, when driving a car, drowsiness (this ability might be considerably different depending on whether it is "old-fashioned" manual or automated driving (AD) [90]); self-control and the ability to "act in cold blood" in hazardous and even life-threatening situations; mature (realistic) thinking; ability to operate effectively under time pressure; ability to operate effectively, when necessary, in a tireless fashion, for a long period of time (tolerance to stress); ability to make well-substantiated decisions in a short period of time; team-player attitude, when necessary; ability and willingness to follow orders, when necessary; swiftness in reaction, when necessary; adequate trust; and the ability to maintain the optimal level of physiological arousal. These and other qualities are certainly of different importance in different HITL situations. The HCF could also be time-dependent.
It is clear that different individuals possess these qualities in different degrees. Captain Chesley Sullenberger ("Sully"), the hero of the famous miracle-on-the-Hudson event, did indeed possess an outstanding HCF. As a matter of fact, the "miracle" was not that he managed to ditch the aircraft successfully in an extraordinary situation, but that an individual with such an outstanding HCF happened to be at the controls. As to the role of trust: when the probability of the trustee's non-failure is taken as zero, this means that there is a complete under-trust, a distrust of someone else's authority or expertise (the "not invented here (NIH)" syndrome, which is typical for big organizations or corporations); when the probability of the trustee's non-failure is one, this means that there is an extreme over-trust in the trustee's technology and/or leadership abilities ("my neighbor's grass is always greener" and "no man is a prophet in his own land"), which is, as is known, typical for small companies or organizations.
The role of the human factor (HF) in various, mostly aerospace, missions and situations, was addressed in numerous publications (see, e.g., ). When PPM analyses are conducted with an intent to assess the probability of non-failure, considering the role of the HCF vs. his/her MWL, a suitable model is DEPDF based one. This model is similar to the BAZ model, which also leads to a double-exponential relationship, but does not contain temperature as an important parameter affecting the TTF. Like in the BAZ model, the necessary parameters of the DEPDF model can be obtained for the given HCF and MWL from the appropriately designed and conducted FOAT.
Let us show how this could be done, using as an example, the role of the HF in aviation. Flight simulator could be employed as an appropriate FOAT vehicle to quantify, on the probabilistic basis, the required level of the HCF with respect to the expected MWL when fulfilling a particular mission. When designing and conducting FOAT aimed at the evaluation of the sensitivity parameter γ in the distribution for the probability of non-failure, a certain MWL factor I (electro-cardiac activity, respiration, skin-based measures, blood pressure, ocular measurements, brain measures, etc.) should be monitored and measured on the continuous basis until its agreed-upon high value I * , viewed as an indication of a human failure, is reached. Then the above DEPDF distribution for the probability of non-failure could be written as Bringing together a group of more or less equally and highly qualified individuals, one should proceed from the fact that the HCF is a characteristic that remains more or less unchanged for these individuals during the relatively short time of the FOAT. The MWL, on the other hand, is a short-term characteristic that can be tailored, in many ways, depending on the anticipated MWL conditions. From the above expression we have: Let the FOAT is conducted at two MWL levels, G 1 and G 2 , and the criterion I * was observed and recorded at the times of t 1 and t 2 for the established (observed, recorded) percentages of Q 1 = 1 -P 1 and Q 2 = 1 -P 2 , respectively. Then the condition for the HCF F that should remain unchanged enables to obtain the following formula for the sensitivity factor γ: during the incident, notably by New York City Mayor (Michael Bloomberg at that time) and New York State Governor David Paterson, who opined, "We had a Miracle on 34th Street. I believe now we have had a Miracle on the Hudson." Outgoing U.S. President George W. 
Bush said he was "inspired by the skill and heroism of the flight crew", and he also praised the emergency responders and volunteers. Then President-elect Barack Obama said that everyone was proud of Sullenberger's "heroic and graceful job in landing the damaged aircraft", and thanked the A320's crew.
The double-exponential probability density function (DE-PDF) [70] for the random HCF has been revisited in the addressed adequate trust problem with an intent to show that the entropy of this distribution, when applied to the trustee, can be viewed as an appropriate quantitative characteristic of the propensity of a human to make a decision influenced by an under-trust or an over-trust. DEPDF's entropy for the human non-failure sheds quantitative light on why under-trust and over-trust should be avoided. A suitable modification of the DEPDF for the human non-failure, whether it is the performer (decision maker) or the trustee, could be assumed in the following simple form Where P is the probability of non-failure, t is time, F is the HCF, G is the MWL, and γ is the sensitivity factor for the time.
The expression for the probability of non-failure P makes physical sense. Indeed, the probability P of human non-failure, when fulfilling a certain task, decreases with an increase in time and increases with an increase in the ratio of the HCF to the mental workload (MWL). At the initial moment of time (t = 0) the probability of non-failure is P = 1 and exponentially decreases with time, especially for low F/G ratios. For very large HCF-to-the-MWL ratios the probability P of non-failure is also significant, even for not-very short operation times. The above expression, depending on a particular task and application, could be applied either to the performer (the decision maker) or to the trustee. The trustee could be a human, a technology, a concept, an existing best practice, etc.
The ergonomics underlying the above distribution could be seen from the time derivative = -PlnP is the entropy of this distribution. The formula for the time derivative of the probability of non-failure indicates that the above DEPDF reflects an assumption that the time derivative of the probability of non-failure is proportional to the entropy of this distribution and decreases with an increase in time. As to the expression for the DEPDF, it sheds useful quantitative light on the Ref. [67] recommendation that both under-trust and over-trust should be avoided. The entropy H(P), when applied to the above distribution and viewed in this case as the probability of non-failure of the trustee's performance, is zero for both extreme values of this performance: When the probability of the trustee's non-failure is zero, it should be interpreted as an extreme under-trust in The HCF of the individuals that underwent the accelerated testing can be determined as: Let, e.g., the same group of individuals was tested at two different MWL levels, G 1 and G 2 , until failure (whatever its definition and nature might be), and let the MWL ratio was 2 Because of that the TTF was considerably shorter and the number of the failed individuals was considerably larger, for the same I * level (say, I * = 120) in the second round of tests. Let, e.g., the probabilities of non-failure and the corresponding times are P 1 = 0.8, P 2 = 0.5, t 1 = 2.0 h and t 2 = 1.5 h. Then the ratios n 1,2 are The calculated required HCF-to-MWL ratios ln ln 62.7038 for different probabilities of non-failure and for different times are shown in Table 3.
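The two-round FOAT example above can be sketched numerically. The double-exponential form P = exp(-γI*t exp(-F/G)) used below is the assumed reconstruction discussed in the text, and the helper names are illustrative; the sketch also checks numerically the entropy property dP/dt = -H(P)/t.

```python
import math

# Sketch (assumed model): FOAT-based estimation of the DEPDF parameters,
# with P = exp(-gamma * I_star * t * exp(-F/G)).

def n_value(P, I_star, t):
    """n = -ln(P)/(I_star*t) = gamma*exp(-F/G) for the assumed model."""
    return -math.log(P) / (I_star * t)

# Two FOAT rounds with the same group of individuals, G2 = 2*G1, and the
# same threshold I* = 120 of the monitored MWL factor:
I_star = 120.0
n1 = n_value(0.8, I_star, 2.0)   # round 1: P1 = 0.8 reached at t1 = 2.0 h
n2 = n_value(0.5, I_star, 1.5)   # round 2: P2 = 0.5 reached at t2 = 1.5 h

# Unchanged HCF: F = G1*ln(gamma/n1) = 2*G1*ln(gamma/n2) => gamma = n2**2/n1
gamma = n2**2 / n1
F_over_G1 = math.log(gamma / n1)   # dimensionless required HCF-to-MWL ratio
print(f"gamma = {gamma:.3e} 1/h, required F/G1 = {F_over_G1:.3f}")

# Numerical check of the entropy property dP/dt = P*ln(P)/t = -H(P)/t:
P = lambda t: math.exp(-gamma * I_star * t * math.exp(-F_over_G1))
t, h = 1.0, 1e-5
dPdt = (P(t + h) - P(t - h)) / (2 * h)            # central finite difference
assert abs(dPdt - P(t) * math.log(P(t)) / t) < 1e-6
```

Under these assumptions the required HCF-to-MWL ratio comes out close to 2.8, i.e., the HCF must considerably exceed the MWL, in line with the Table 3 discussion.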
As evident from the calculated data, the level of the HCF in this example should considerably exceed the level of the MWL, so that a high enough value of the probability of human non-failure is achieved, especially for long operation times. It is concluded that trust is an important HCF quality and should be included in the list of such qualities for a particular "human-in-the-loop" task. The HCF should be evaluated vs. the MWL when there is a need to assure a successful and safe outcome of a particular aerospace or military mission, or when considering the role of the HF in a non-vehicular engineering system. The DEPDF for the random HCF is revisited, and it is shown, particularly, that its entropy can be viewed as an appropriate quantitative characteristic of the propensity of a human to an under-trust or an over-trust judgment and, as a consequence, to erroneous decision making or to a performance error.

PPM of an Emergency-Stopping Situation in AD or on a RR
"Education is man's going forward from cocksure ignorance to thoughtful uncertainty."

Kenneth G. Johnson, American high-school English teacher
Automotive engineering is entering a new frontier: the AD era [91][92][93][94][95][96][97][98]. Level 3 of driving automation, conditional automation, as defined by SAE [96], considers a vehicle controlled autonomously by the system, but only under "specific conditions". These conditions include speed control, steering, and braking, as well as monitoring the environment. When/if, however, such conditions are no longer met, and the monitoring of the environment reveals an unexpected or uncontrollable situation, the system is supposed to hand over control to the human operator. The new AD frontier requires, on the one hand, the development of advanced navigation equipment and instrumentation and, first of all, an effective and reliable AD system itself, but also numerous cameras, radars, LiDARs ("optical radars") and other electro-optic means with fast and effective processing capabilities. In addition, special qualifications and attitudes are required of the key HITL "component" of the system, the driver. It is he/she who is ultimately responsible for the vehicle and passenger safety and who should effectively interact with the system on a permanent basis. It is imperative that the driver of an AD vehicle receive special training before operating such a vehicle, and this requirement should be reflected in his/her driver's license.
The pre-deceleration time (which includes the decision-making time, the pre-braking time and, to some extent, also the brake-adjusting time) and the corresponding distance (σ0) characterize, in the extraordinary situation in question, when compared to the deceleration time and distance (σ1), the role of the HCF. Indeed, if this factor is large (the driver reacts fast and effectively), the ratio η = σ1/σ0 is significant. It is also noteworthy that the successful outcome of an extraordinary AD-related situation depends also on the level of trust of the human driver towards the system and on the system's user-friendly and failure-free performance. Adequate trust should therefore be viewed as an important HCF in making AD sufficiently safe. A more or less detailed evaluation of the role of the driver's trust towards the AD system performance is, however, beyond the scope of this analysis and is considered as future work. We would like to indicate also that the overall distance of the trip and the driver's fatigue and state of health might have a significant effect on his/her alertness. This circumstance should also be considered and possibly quantified; this effort, too, is left for future work.
When a deterministic approach is used to quantify the role of the major factors affecting the safety of the outcome of a possible collision situation, when an obstacle is suddenly detected in front of the moving vehicle, the role of the HF could be quantified by the ratio HF = S1/S = S1/(S0 + S1), where S0 is the pre-deceleration distance, S1 is the deceleration distance, and S = S0 + S1 is the stopping distance. The factor HF changes from one to zero when the distance S0 that characterizes the human performance changes from zero (exceptionally high performance) to a large number (low performance). As has been indicated, special training might be necessary to make the human performance adequate for a particular AD system and vehicle type, and the relevant information should even be reflected in the driver's license.
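The deterministic human-factor ratio just introduced can be sketched numerically (Python). The speed, time and deceleration values below match the illustrative example discussed in the text; evaluating the deceleration distance at the full initial speed is a conservative assumption of this sketch.

```python
# Deterministic stopping-distance sketch. All input values are the
# illustrative ones used in the text; evaluating the deceleration distance
# at the full initial speed (S1 = V0*T1, rather than the kinematic
# V0**2/(2*a)) is a conservative assumption made here.
V0 = 10.0   # initial speed, m/s
T0 = 3.0    # pre-deceleration (human-governed) time, s
a = 4.0     # deceleration, m/s^2 (vehicle-governed)

S0 = V0 * T0        # pre-deceleration distance, m
T1 = V0 / a         # deceleration time, s
S1 = V0 * T1        # deceleration distance (conservative), m
S = S0 + S1         # total stopping distance, m
HF = S1 / S         # human-factor ratio: -> 1 for ideal, -> 0 for poor performance
print(f"S0 = {S0} m, S1 = {S1} m, S = {S} m, HF = {HF:.4f}")
```

With these inputs the two distance constituents are comparable, and the human-governed portion S0 dominates the total stopping distance.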
The pre-deceleration time, which is characterized by the constant speed of the vehicle, includes: 1) the decision-making time, i.e., the time that the system and/or the driver needs to decide that/if the driver has to intervene and take over control of the vehicle; 2) the pre-braking time that the driver needs to make his/her decision on pushing the brakes; and 3) the brake-adjusting time needed to adjust the brakes when interacting with the vehicle's anti-lock (anti-skid) braking system. Although both the human and the vehicle performance affect this third period of time and the corresponding distance, it can be conservatively assumed that the brake-adjusting time is simply part of the pre-deceleration time. Thus, two major critical periods could be distinguished in an approximate PPM of a possible collision situation: 1) The pre-deceleration time, counted from the moment when the steadfast obstacle is detected until the moment when the vehicle starts to decelerate. This time depends on the driver's experience, age, fatigue and other relevant items of his/her HCF. It could be assumed that during this time the vehicle keeps moving with its initial speed V0, and it is this time that characterizes the performance of the driver. If, e.g., the vehicle's initial speed is V0 = 10 m/s and the pre-deceleration time is T0 = 3.0 s, then the corresponding distance is S0 = V0T0 = 30 m. 2) The deceleration time, which, assuming a constant deceleration a, can be evaluated as T1 = V0/a. If, e.g., a = 4.0 m/s² (it is this deceleration that characterizes the vehicle's ability to effectively decelerate) and the initial velocity is V0 = 10 m/s, then the deceleration time is T1 = 2.5 s, and the corresponding deceleration distance, conservatively evaluated for the initial speed V0, is S1 = V0T1 = 25 m. The total stopping distance is therefore S = S0 + S1 = 55 m, so that the contributions of the two main constituents of this distance are comparable in this example. Note that, as follows from this result, the pre-deceleration time T0, affected by the human performance, might be even more critical than the deceleration time T1, affected by the decelerating vehicle and its braking system. The total stopping time is simply proportional to the initial velocity, which should be low enough to avoid an accident and to allow the driver to make his/her brake-no-brake decision and push the brakes in a timely fashion. The human factor is HF = S1/S ≈ 0.45 in this example. If the actual stopping distance S is smaller than the ASD Ŝ determined by the radar or LiDAR, then collision could possibly be avoided: in the above example, the ASD should not be smaller than, say, Ŝ = 56 m. The probabilistic analytical modeling (PAM) below, based on the Rayleigh distribution for the operational times and distances, indicates, however, that for low enough probabilities of collision the ASD should be considerably larger than that (see Table 4 data).
While one has to admit that at present "we do not even know what we do not know" [91] about the challenges and pitfalls associated with the use of AD systems, we do know that the HITL role will hardly change in the foreseeable future, even when more advanced AD equipment is developed and installed. What is also clear is that the safe outcome of an off-normal AD-related situation cannot be assured if it is not quantified and that, because of various inevitable and unpredictable intervening uncertainties, such quantification should be done on the probabilistic basis.
In effect, the difference between a highly reliable and an insufficiently reliable performance of a system or a human is "merely" the difference in their never-zero probabilities of failure. Accordingly, PAM is employed in this analysis to predict the likelihood of a possible collision, when the system and/or the driver (the significance of this important distinction has still to be determined and decided upon [98]) suddenly detects a steadfast obstacle, and when the only way to avoid collision is to decelerate the vehicle using the brakes. We would like to emphasize that PPM should always be considered to complement computer simulations in various HITL- and AD-related problems. These two modeling approaches are usually based on different assumptions and use different evaluation techniques, and if the results obtained using these two different approaches are in reasonably good agreement, then there is reason to believe that the obtained data are sufficiently accurate and trustworthy.
It has been demonstrated, mostly in application to the aerospace domain, how PPM could be effectively employed when the reliability of the equipment (instrumentation), both its hard- and software, and the human performance contribute jointly to the outcome of a vehicular mission or an extraordinary situation. One of the developed models, the convolution model, is brought here "down to earth", i.e., extended, with appropriate modifications, to the AD situation, when there is a need to avoid collision. The automotive environment might be much less forgiving than the aerospace one: while slight deviations in aircraft altitude, speed, or human actions are often tolerable without immediate consequences, a motor vehicle is likely to have much tighter control requirements for avoiding collision than an aircraft. We would like to point out also that the driver of an AD vehicle should possess special "professional" qualities associated with his/her need to interact with an AD system; these qualities should be much higher and more specific than those of today's amateur driver. In reality, none of the above times and the corresponding distances is known, or could be, or even ever will be, evaluated with sufficient certainty, and there is therefore an obvious incentive to employ a probabilistic approach to assess the likelihood of an accident. To some extent, our predictive model is similar to the convolution model applied in the helicopter-landing-ship situation [85], where, however, random times, and not random distances, were considered. If the probability P(S > Ŝ) that the random sum S = S0 + S1 of the two random distances S0 and S1 is larger than the anticipated sight distance (ASD) Ŝ to the obstacle, determined by the system for the moment of time when the obstacle was detected, is sufficiently low, then there is a good chance and a good reason to believe that collision will be avoided.
It is natural to assume that the random times T0 and T1, corresponding to the distances S0 and S1, are distributed in accordance with the Rayleigh law. Indeed, both these times cannot be zero, but cannot be very long either. In addition, in an emergency situation, short time values are more likely than long ones, and because of that their probability density functions should be heavily skewed in the direction of short times. The Rayleigh distribution possesses these physically important properties and is accepted in our analysis. The probability P_S that the sum S = S0 + S1 of the random variables S0 and S1 exceeds a certain level Ŝ is expressed by the distribution (A-1) in Appendix A, and the computed probabilities P_S of collision are shown in Table 4. The calculated data indicate, particularly, that the probability of collision for the input data used in the above deterministic example, where the pre-deceleration distance was σ0 = S0 = 30 m and the deceleration distance was σ1 = S1 = 25 m, is as high as 0.6320.
As evident from Table 4, the probability of collision will be considerably lower for larger available distances Ŝ. The calculated data clearly indicate that the available distance plays the major role in avoiding collision, while the HF is less important. It is noteworthy in this connection that the Rayleigh distribution is an extremely conservative one. Data that are less conservative and, perhaps, more realistic could be obtained by using, say, the Weibull distribution for the random times and distances.
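The qualitative behavior of the collision probability under the Rayleigh assumption can be illustrated by a Monte-Carlo sketch. The choice of the scale parameters (set here so that the most probable distances equal the σ0 = 30 m and σ1 = 25 m of the deterministic example) and the sampling scheme are assumptions of this sketch, not necessarily the parameterization behind the convolution (A-1) and Table 4.

```python
import math
import random

# Monte-Carlo estimate of P_S = P(S0 + S1 > S_hat), with the random
# pre-deceleration and deceleration distances S0 and S1 assumed to be
# Rayleigh-distributed. Scale parameters are illustrative assumptions.
random.seed(1)

def rayleigh(sigma):
    # Inverse-transform sampling; the mode of the Rayleigh law equals sigma.
    return sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))

def collision_probability(s_hat, sigma0=30.0, sigma1=25.0, trials=50_000):
    hits = sum(rayleigh(sigma0) + rayleigh(sigma1) > s_hat for _ in range(trials))
    return hits / trials

for s_hat in (60.0, 100.0, 150.0, 200.0):
    print(f"S_hat = {s_hat:5.1f} m  ->  P_S ~ {collision_probability(s_hat):.4f}")
```

The estimated probability drops rapidly with the available sight distance, in line with the Table 4 trend; swapping the sampler for a Weibull one (e.g., `random.weibullvariate`) would give the less conservative estimates mentioned above.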
Note that the probability of collision is, in our approach, the probability P_S that the available distance Ŝ to the obstacle is exceeded. The Table 4 data are based on the convolution equation for this probability, with the PDFs of the random distances taken in the Rayleigh form. The computed data in Table 4 indicate that the ASD Ŝ and the deceleration ratio η have a significant effect on the probability P_S of collision. This is particularly true for the ASD. Assuming that a level of P_S on the order of 10^-4 might be acceptable, the ratio η of the "useful" braking distance σ1 to the "useless", but inevitable, pre-braking distance σ0 should be significant (higher than, say, 3) to assure a low enough probability P_S of collision.
The following conclusions could be drawn from the carried-out analysis: 1) Probabilistic analytical modeling provides an effective means to support simulations, which will eventually help in the reduction of road casualties; it is able to improve dramatically the state of the art in understanding and accounting for human performance in various vehicular missions and off-normal situations, and in particular in the pressing issue of analyzing the human-vehicle handshake, i.e., the role of human performance when taking over vehicle control from the automated system; and it enables quantifying, on the probabilistic basis, the likelihood of collision of an automatically driven vehicle for the situation when an immovable obstacle is suddenly detected in front of the moving vehicle; 2) The computed data indicate that it is the ASD that is, for the given initial speed, the major factor in keeping the probability of collision sufficiently low; 3) Future work should include implementation of the suggested methodology, considering that the likelihood of an accident, although never zero, could and should be predicted, adjusted to a particular vehicle, autopilot, driver and environment, and made low enough; it should also consider, on the probabilistic basis, the role of the variability of the available sight distance; and 5) Future work should include training a system to convolute numerically a larger number of physically meaningful non-normal distributions. The developed formalism could be used also for the case when an obstruction is unexpectedly detected in front of a railroad (RR) train [99][100][101][102][103][104][105][106][107][108][109][110][111][112][113][114].
Various aspects of SoH and HE characteristics are intended to be addressed in the author's future work as important items of outer-space medicine. The recently suggested three-step-concept methodology is intended to be employed in such an effort. The considered PPM/PRA approach is based on the application of the DEPDF. It is assumed that the mean time to failure (MTTF) of a human performing his/her duties is an adequate criterion of his/her failure-/error-free performance: in the case of an error-free performance this time is infinitely long, and it is very short in the opposite case. The suggested expression for the DEPDF considers that both a high MTTF and a high HCF result in a higher probability of non-failure, but it enables one to separate the MTTF, as the direct HF characteristic, from other HCF features, such as, e.g., level of training, ability to operate under time pressure, mature thinking, etc.
It should be emphasized that the DEPDFs considered in this and in the author's previous publications are different from the classical (Laplace, Gumbel) double-exponential distributions and are not the same for different HITL-related problems of interest. The DEPDF could be introduced, as has been shown in the author's previous publications, in many different ways, depending on the particular risk-analysis field, mission or situation, as well as on the sought information. The DEPDF suggested in this analysis considers the following major factors: flight duration; the acceptable level of the continuously monitored (measured) human state-of-health (SoH) characteristic (symptom); the MTTF as an appropriate HE characteristic; the level of the mental workload (MWL); and the human capacity factor (HCF). It is noteworthy that while the notion of the MWL is well established in aerospace and other areas of human psychology and is reasonably well understood and investigated, the notion of the HCF was introduced by the author of this analysis only several years ago. The rationale behind that notion is that it is not the absolute MWL level, but the relative levels of the MWL and HCF that determine, in addition to other critical factors, the probability of the human failure and the likelihood of a mishap.
It has been shown that the DEPDF has its physical roots in the entropy of this function. It has been shown also how the DEPDF could be established from highly focused and highly cost-effective FOAT data. FOAT is a must if understanding the physics of failure of the instrumentation and/or of the human performance is imperative to assure a high likelihood of a failure-free aerospace operation. The FOAT data could be obtained by testing on a flight simulator, by analyzing the responses to post-flight questionnaires, or by using the Delphi technique. FOAT could not be conducted, of course, in application to humans and their health, but testing and state-of-health monitoring could be run until a certain level (threshold) of the human SoH characteristic (symptom), still harmless to his/her health, is reached.
The general concepts addressed in our analysis are illustrated by practical numerical examples. It is demonstrated how the probability of a successful outcome of the anticipated aerospace mission can be assessed in advance, prior to the fulfillment of the actual operation. Although the input data in these examples are more or less hypothetical, they are nonetheless realistic.

Quantifying the Effect of the Astronaut's/Pilot's/Driver's/Machinist's SoH on His/Her Performance
"There is nothing more practical than a good theory"

Kurt Zadek Lewin, German-American psychologist
The subject of this section can be defined as probabilistic ergonomics science, probabilistic HF engineering, or probabilistic human-systems technology. The section is geared to HITL-related situations, when human performance and equipment reliability contribute jointly to the outcome of a mission or an extraordinary situation. While considerable improvements in various aerospace missions and off-normal situations can be achieved through better traditional ergonomics, better health control and work environment, and other well-established non-mathematical human-psychology means that directly affect the individual's behavior, health and performance, there is also a significant potential for improving safety in the air and in outer space by quantifying the roles of the HF and of the human-equipment interaction using PPM and PRA methods and approaches.
While the mental workload (MWL) level is always important and should always be considered when addressing and evaluating the outcome of a mission or a situation, the human capacity factor (HCF) is usually equally important: the same MWL can result in a completely different outcome depending on the HCF level of the individual(s) involved. In other words, it is the relative levels of the MWL and HCF that have to be considered and quantified, in one way or another, when assessing the likelihood of a mission's or a situation's success and safety. The MWL and HCF can be characterized by different means and different measures, but it is clear that both these factors have to have the same units in a particular problem of interest.
It should be emphasized that one important and favorable consequence of an effort based on the consideration of the MWL and HCF roles is bridging the existing gap between what aerospace psychologists and system analysts do. Based on the author's quite a few interactions with aerospace system analysts and avionic human psychologists, these two categories of specialists seldom team up and actively collaborate. Application of the PPM/PRA concept provides, therefore, a natural and effective means for quantifying the expected HITL-related outcome of a mission or a situation and for minimizing the likelihood of a mishap, casualty or failure. By employing quantifiable and measurable ways of assessing the role and significance of various uncertainties, and by treating HITL-related missions and situations as part, often the most crucial part, of the complex man-instrumentation-equipment-vehicle-environment system, one could improve dramatically the human performance and the state of the art in assuring aerospace mission success and safety.
The DEPDF could be introduced, as has been indicated, in many ways, and its particular formulation depends on the problem addressed. In this analysis we suggest a DEPDF that enables one to evaluate the impact of three major factors, the MWL G, the HCF F, and the time t (possibly affecting the navigator's performance and sometimes even his/her health), on the probability P_h(F, G, t) of his/her non-failure. With an objective to quantify the likelihood of the human non-failure, the corresponding probability could be sought in the form of the following DEPDF: P_h(F, G, t) = P0 exp[-γ_S S*t (G/G0) exp(-γ_T T* F/F0)]. Here P0 is the probability of the human non-failure at the initial moment of time (t = 0) and at a normal (low) level of the MWL (G = G0); S* is the threshold (acceptable level) of the continuously monitored/measured (and possibly cumulative, effective, indicative, even multi-parametric) human health characteristic (symptom), such as, e.g., body temperature, arterial blood pressure, oxyhemometric determination of the level of saturation of blood hemoglobin with oxygen, electrocardiogram measurements, pulse frequency and fullness, frequency of respiration, or measurement of the skin resistance that reflects the covering of the skin with sweat (since the time t and the threshold S* enter the expression as the product S*t, each of these parameters has a similar effect on the sought probability); γ_S is the sensitivity factor for the symptom S*; G ≥ G0 is the actual (elevated, off-normal, extraordinary) MWL, which could be time-dependent; G0 is the MWL in ordinary (normal) operation conditions; T* is the mean time to error/failure (MTTF); γ_T is the sensitivity factor for the MTTF T*; F ≥ F0 is the actual (off-normal) HCF exhibited or required in an extraordinary condition of importance; and F0 is the most likely (normal, specified, ordinary) HCF. It is clear that there is a certain overlap between the levels of the HCF F and the T* value, which has also to do with the human quality.
The difference is that the T* value is a short-term characteristic of the human performance that might be affected, first of all, by his/her personality, while the HCF is a long-term characteristic of the human, such as his/her education, age, experience, and ability to think and act independently. The author believes that the MTTF T* might be determined for the given individual during testing on a flight simulator, while the factor F, although it should also be quantified, cannot typically be evaluated experimentally using accelerated testing on a flight simulator. While the P0 value is defined as the probability of non-failure at a very low level of the MWL G, it could be determined and evaluated also as the probability of non-failure for a hypothetical situation when the HCF F is extraordinarily high, i.e., for an individual/pilot/navigator who is exceptionally highly qualified, while the MWL G is still finite, and so is the operational time t.
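The qualitative behavior of such a probability of human non-failure can be illustrated with a minimal sketch, assuming, for illustration only, a product-form DEPDF P_h = P0 exp[-γ_S S*t (G/G0) exp(-γ_T T* F/F0)]; the functional form and every parameter value below are assumptions of this sketch, not data from the text.

```python
import math

# Illustrative sketch: a product-form DEPDF for the probability of human
# non-failure. The functional form and all parameter values are assumptions
# made for illustration only.

def p_h(t, G, F, P0=0.99, gamma_S=0.1, S_star=1.0,
        gamma_T=0.2, T_star=10.0, G0=1.0, F0=1.0):
    mwl_term = gamma_S * S_star * t * (G / G0)       # "objective" factors
    hcf_term = math.exp(-gamma_T * T_star * F / F0)  # "subjective" factors
    return P0 * math.exp(-mwl_term * hcf_term)

print(p_h(t=1.0, G=2.0, F=1.0))   # short flight, ordinary HCF
print(p_h(t=5.0, G=2.0, F=1.0))   # longer flight  -> lower probability
print(p_h(t=5.0, G=2.0, F=2.0))   # higher HCF     -> higher probability
```

The sketch reproduces the behavior described in the text: the probability of non-failure equals P0 at t = 0, decays with flight time and MWL, and recovers towards P0 for a high HCF and a high MTTF.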
Note that the above function P_h(F, G, t) has a nice, symmetric and consistent form: it reflects, in effect, the roles of the "objective", "external" factors (the MWL and the SoH) and of the "subjective", "internal" ones (the HCF and the HE).
The numerical examples considered should be viewed as useful illustrations of how the suggested DEPDF model can be implemented. It is the author's belief that the developed methodologies, with appropriate modifications and extensions, when necessary, can be effectively used to quantify, on the probabilistic basis, the roles of various critical uncertainties affecting the success and safety of an aerospace mission or a situation of importance. The author believes also that these methodologies and formalisms can be used in many other cases, well beyond the aerospace domain, when a human encounters an uncertain environment or a hazardous off-normal situation, when there is an incentive/need to quantify his/her qualifications and performance, when there is a need to assess and possibly improve the human role in a particular HITL mission or situation, and/or when there is an intent to include this role into an analysis of interest, with consideration of the navigator's SoH. Such an incentive always exists for astronauts in their long outer-space journeys, or for long maritime travels, but it could also be of importance for long enough aircraft flights, when, e.g., one of the two pilots gets incapacitated during the flight.
The analysis carried out here is, in effect, an extension of the above effort and is focused on the application of the DEPDF in those HITL-related problems in aerospace engineering that are aimed at the quantification, on the probabilistic basis, of the role of the HF, when both the human performance and, particularly, his/her SoH affect the outcome of an aerospace mission or situation. While the PPM of the reliability of the navigation instrumentation (equipment), both its hard- and software, could be carried out using the well-known Weibull distribution, on the basis of the BAZ equation, or by other suitable and more or less well-established means, the role of the human factor, when its quantification is critical, could be considered by using the suggested DEPDF. There might be other ways to go, but this is, in the author's view and experience, a quite natural and rather effective way.
The DEPDF is of the extreme-value-distribution type, i.e. places an emphasis on the inputs of extreme loading conditions that occur in extraordinary (off-normal) situations, and disregards the contribution of low level loadings (stressors). Our DEPDF is of a probabilistic a-priori type, rather than a statistical a-posteriori type approach, and could be introduced in many ways depending on the particular mission or a situation, as well as on the sought information. It is noteworthy that our DEPDF is not a special case, nor a generalization, of Gumbel, or any other well-known statistical EVD used for many decades in various applications of the statistics of extremes, such as, e.g., prediction of the likelihood of extreme earthquakes or floods. Our DEPDF should be rather viewed as a practically useful engineering or HF related relationship that makes physical and logical sense in many practical problems and situations, and could and should be employed when there is a need to quantify the probability of the outcome of a HITL-related aerospace mission. The DEPDF suggested in this analysis considers the following major factors: Flight/operation duration; the acceptable level of the continuously monitored (measured) meaningful human SH characteristic (FOAT psychology. Various aspects of the MWL, including modeling, and situation awareness analysis and measurements, were addressed by numerous investigators. HCF, unlike MWL, is a new notion. HCF plays with respect to the MWL approximately the same role as strength/capacity plays with respect to stress/demand in structural analysis and in some economics problems. 
HCF includes, but might not be limited to, the following major qualities that would enable a professional human to successfully cope with an elevated off-normal MWL: Age; fitness; health; personality type; psychological suitability for a particular task; professional experience and qualifications; education, both special and general; relevant capabilities and skills; level, quality and timeliness of training; performance sustainability (consistency, predictability); independent thinking and independent acting, when necessary; ability to concentrate; awareness and ability to anticipate; ability to withstand fatigue; self-control and ability to act in cold blood in hazardous and even life threatening situations; mature (realistic) thinking; ability to operate effectively under pressure, and particularly under time pressure; leadership ability; ability to operate effectively, when necessary, in a tireless fashion, for a long period of time (tolerance to stress); ability to act effectively under time pressure and make well substantiated decisions in a short period of time and in an uncertain environmental conditions; team-player attitude, when necessary; swiftness in reaction, when necessary; adequate trust (in humans, technologies, equipment); ability to maintain the optimal level of physiological arousal. These and other qualities are certainly of different importance in different HITL situations. It is clear also that different individuals possess these qualities in different degrees. Long-term HCF could be time-dependent.
To come up with suitable figures-of-merit (FoM) for the HCF, one could rank, similarly to the MWL estimates, the above and perhaps other qualities on the scale from, say, one to ten, and calculate the average FoM for each individual and particular task. Clearly, MWL and HCF should use the same measurement units, which could be particularly non-dimensional. Special psychological tests might be necessary to develop and conduct to establish the level of these qualities for the individuals of significance. The importance of considering the relative levels of the MWL and the HCF in human-in-theloop problems has been addressed and discussed in several earlier publications of the author and is beyond the scope of this analysis.
The employed DEPDF makes physical sense. Indeed, 1) When time t, and/or the level S * of the governing SH symptom, and/or the level of the MWL G are significant, the probability of non-failure is always low, no matter how high the level of the HCF F might be; 2) When the level of the HCF F and/or the MTTF T * are significant, and the time t, and/or the level S * of the governing SoH symptom, and/or the level of the MWL G are finite, the probability P h (F,G, S * ) of the human non-failure becomes close to the probability P 0 of the human non-failure at the initial moment of time (t = 0) and at a normal (low) level of the MWL (G = G 0 ); 3) when the HCF F is on the ordinary level F 0 then tive", "internal", impact The rationale below the structures of these expressions is that the level of the MWL could be affected by the human's SH (the same person might experience a higher MWL, which is not only different for different humans, but might be quite different depending on the navigator's SH), while the HCF, although could also be affected by the state of his/her health (SH), has its direct measure in the likelihood that he/she makes an error. In our approach this circumstance is considered by the T * value, mean time to error (MTTF), since an error is, in effect, the failure to an error-free performance. When the human's qualification is high, the likelihood of an error is lower. The "external" E = MWL + SoH factor is more or less a short term characteristic of the human performance, while the factor I = HCF + HE is a more permanent, more long term characteristic of the HCF and its role. It is noteworthy that the links between the human's mind (MWL) and his/her body (SoH) are closely linked and that such links are far from being well defined and straightforward. The suggested formalism to consider this circumstance is just a possible way to account for such a link. Difficulties may arise in some particular occasions when the MWL and the SH factors overlap. 
It is anticipated therefore that the MWL impact in the suggested formalism considers, to an extent possible, various more or less important impacts other than the SoH related one.
Measuring the MWL has become a key method of improving aviation safety, and there is an extensive published work devoted to the measurement of the MWL in aviation, both military and commercial. Pilot's MWL can be measured using subjective ratings and/or objective measures. The subjective ratings during FOAT (simulation tests) can be, e.g., after the expected failure is defined, in the form of periodic inputs to some kind of data collection device that prompts the pilot to enter, e.g., a number between 1 and 10 to estimate the MWL every few minutes. There are also some objective MWL measures, such as, e.g., heart rate variability. Another possible approach uses post-flight questionnaire data: it is usually easier to measure the MWL on a flight simulator than in actual flight conditions. In a real aircraft, one would probably be restricted to using post-flight subjective (questionnaire) measurements, since a human psychologist would not want to interfere with the pilot's work. Given the multidimensional nature of MWL, no single measurement technique can be expected to account for all the important aspects of it. In modern military aircraft, complexity of information, combined with time stress, creates significant difficulties for the pilot under combat conditions, and the first step to mitigate this problem is to measure and manage the MWL. Current research efforts in measuring MWL use psycho-physiological techniques, such as electroencephalographic, cardiac, ocular, and respiration measures in an attempt to identify and predict MWL levels. Measurement of cardiac activity has been also a useful physiological technique employed in the assessment of MWL, both from tonic variations in heart rate and after treatment of the cardiac signal. 
Such an effort belongs to the fields of astronautic medicine and aerospace human Then we find, The P values calculated for the case T * = 0 (human error is likely, but could be rapidly corrected because of the high HCF of the performer) indicate that: 1) At normal MWL level and/or at an extraordinarily (exceptionally) high HCF level the probability of human non-failure is close to 100%; 2) If the MWL is exceptionally high, the human will definitely fail, no matter how high his/her HCF is; 3) If the HCF is high, even a significant MWL has a small effect on the probability of non-failure, unless this MWL is exceptionally large (indeed, highly qualified individuals are able to cope better with various off-normal situations and get tired less when time progresses than individuals of ordinary capacity); 4) The probability of non-failure decreases with an increase in the MWL (especially for relatively low MWL levels) and increases with an increase in the HCF (especially for relatively low HCF levels); 5) For high HCFs the increase in the MWL level has a much smaller effect on the probabilities of non-failure than for low HCFs; it is noteworthy that the above intuitively more or less obvious judgments can be effectively quantified by using analyses based on Eqs. (1) and (4); 6) The increases in the HCF (F /F 0 ratio) and in the MWL (G/G 0 ratio) above the 3.0 has a minor effect on the probability of non-failure; this means particularly that the navigator does not have to be trained for an extraordinarily high MWL and/or possess an exceptionally high HCF (F /F 0 ratio), higher than 3.0, compared to a navigator of an ordinary capacity (qualification); in other words, a navigator does not have to be a superman or a superwoman to successfully cope with a high level MWL, but still has to be trained to be able to cope with a MWL by a factor of three higher than the normal level. 
If the requirements For a long time in operation (t →∞) and/or when the level S * of the governing SH symptom is significant (S * →∞) and/ or when the level G of the MWL is high, the probability of non-failure will always be low, provided that the MTTF T * is finite; 4) at the initial moment of time (t = 0) and/or for the very low level of the SH symptom S * (S * = 0) the formula yields: When the MWL G is high, the probability of non-failure is low, provided that the MTTF T * and the HCF F are finite. However, when the HCF is extraordinarily high and/or the MTTF T * is significant (low likelihood that HE will take place), the above probability of non-failure will close to one. In connection with the taken approach it is noteworthy also that not every model needs prior experimental validation. In the author's view, the structure of the suggested models does not. Just the opposite seems to be true: this model should be used as the basis of FOAT oriented accelerated experiments to establish the MWL, HCF, and the levels of HE (through the corresponding MTTF) and his/her SoH at normal operation conditions and for a navigator with regular skills and of ordinary capacity. These experiments could be run, e.g., on different flight simulators and on the basis of specially developed testing methodologies. Being a probabilistic, not a statistical model, the equation (1) should be used to obtain, interpret and to accumulate relevant statistical information. Starting with collecting statistics first seems to be a time consuming and highly expensive path to nowhere.
Assuming, for the sake of simplicity, that the probability P 0 is established and differentiating the expression With respect to the time t the following formula can be obtained: When the MWL G is on its normal level G 0 and/or when the still accepted SH level S * is extraordinarily high, the above formula yields: Hence, the basic distribution for the probability of non-failure is a generalization of the situation, when the decrease in the probability of human performance non-failure with time can be evaluated as the ratio of the entropy ( ) H P  of the above distribution to the elapsed time t, provided that the MWL is on its normal level and/or the HCF of the navigator is exceptionally high. At the initial moment of time (t = 0) and/or when the governing symptom has not yet manifested itself (S * = 0) we obtain: for a particular level of safety are above the HCF for a well educated and well trained human, then the development and employment of the advanced equipment and instrumentation should be considered for a particular task, and the decision about the right way to go should be based on the evaluation, also, preferably, on the probabilistic basis, of both the human and the equipment performance, costs, time-to-completion ("market") and the possible consequences of failure.
In the basic DEPDF (1) there are three unknowns: the probability P 0 and two sensitivity factors γ S and γ T . As has been mentioned above, the probability P 0 could be determined by testing the responses of a group of exceptionally highly qualified individuals, such as, e.g., Captain Sullenberger in the famous Miracle on the Hudson event. Let us show how the sensitivity factors γ S and γ T can be determined. The Eq. (4) can be written as Let FOAT be conducted on a flight simulator for the same group of individuals, characterized by the more or less the same high MTTF T * values and high HCF 0 F F ratios, at two different elevated (off-normal) MWL conditions, G 1 and G 2 . Let the gov-erning symptom, whatever it is, has reached its critical pre-established level S * at the times t 1 and t 2 , respectively, from the beginning of testing, and the corresponding percentages of the individuals that failed the tests were Q 1 and Q 2 , so that the corresponding probabilities of non-failure were 1 P  and 2 P  , respectively. Since the same group of individuals was tested, the right part of the above equation that reflects the levels of the HCF and HE remains more or less unchanged, and therefore the requirement Should be fulfilled, This equation yields: After the sensitivity factor γ S for the assumed symptom level S * is determined, the dimensionless variable γ T T * , associated with the human error sensitivity factor γ T , could be evaluated. The equation (10) can be written in this case as follows: For normal values of the HCF and high values of the MWL The product γ T T * should be always positive and therefore the condition Open Access | Page 243 | When the probability P changes from 1 P = to 0 P = , the t * value changes from Let FOAT has been conducted on a flight simulator or by using another suitable testing equipment for a group of individuals characterized by high HCF ). 
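Since Eq. (1) itself is not reproduced in this excerpt, the limiting properties listed above can be illustrated with a toy model. The sketch below uses a hypothetical double-exponential form (the function, the default P0 = 1, and all parameter values are assumptions for illustration, not the author's actual Eq. (1)):

```python
import math

def depdf_nonfailure(t, s_star, g_ratio, f_ratio, gamma_t_tstar,
                     p0=1.0, gamma_s=1.0):
    """Hypothetical double-exponential probability of human non-failure.

    t             : elapsed time (arbitrary units)
    s_star        : acceptable level of the governing SoH symptom
    g_ratio       : MWL ratio G/G0 (>= 1 for off-normal loads)
    f_ratio       : HCF ratio F/F0 (>= 1 for capable performers)
    gamma_t_tstar : dimensionless product gamma_T * T* (human-error term)

    NOTE: this form merely reproduces the qualitative limits stated in
    the text (P -> 0 for large t, S* or G; P -> P0 for large F or T*).
    Unlike the paper's Eq. (1), it returns exactly p0 at t = 0.
    """
    inner = math.exp(g_ratio - f_ratio - gamma_t_tstar)
    return p0 * math.exp(-gamma_s * s_star * t * inner)

# qualitative check of conclusions 4)-6): P falls as G/G0 grows
# and rises as F/F0 grows, with diminishing effect beyond ~3.0
table = {(f, g): round(depdf_nonfailure(1.0, 1.0, g, f, 3.4), 3)
         for f in (1.0, 2.0, 3.0) for g in (1.0, 2.0, 3.0)}
```

Tabulating `table` over the (F/F0, G/G0) grid reproduces the monotonic trends discussed above, even though the numerical values are only illustrative.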
With these input data, the above formula yields the sought value of the sensitivity factor γS. These results indicate, in particular, the importance of the HCF: even a relatively insignificant increase in the HCF above its ordinary level can lead to an appreciable increase in the probability of human non-failure. Clearly, training and individual qualities are always important.
Let us assess now the sensitivity factor γT of the human error, measured through the mean time to failure (to make an error). The results obtained for this factor turn out to be rather close to each other, so that in an approximate analysis one could accept γT T* ≈ 3.4. After the sensitivity factors for the HE and SoH aspects of the HF are determined, the probabilities of non-failure can be computed for any levels of the MWL and HCF.
The following conclusions can be drawn from the carried-out analyses. The suggested DEPDF for human non-failure can be applied in various HITL-related aerospace problems, when the human qualification and performance, as well as his/her state of health, are crucial, and therefore the ability to quantify them is imperative; and, since nothing and nobody is perfect, these evaluations could and should be done on the probabilistic basis. The MTTF is suggested as a suitable characteristic of the likelihood of a human error: if no error occurs for a long time, this time is significant; in the opposite situation, it is very short. The MWL, the HCF, the time, and the acceptable levels of the human-health characteristic and of the propensity to make an error are the important parameters that determine the probability of non-failure of a human when conducting a flight mission or coping with an extraordinary situation, and it is these parameters that are considered in the suggested DEPDF. The MWL and HCF levels, the acceptable cumulative human-health characteristic, and the characteristic of the propensity to make an error should be established depending on the particular mission or situation, and the acceptable/adequate safety level on the basis of the FOAT data obtained using flight-simulation equipment and instrumentation, as well as other suitable and trustworthy sources of information, including, perhaps, the well-known and widely used Delphi technique (method). The suggested DEPDF-based model can be used in many other fields of engineering and applied science as well, including various fields of human psychology, whenever there is a need to quantify the role of the human factor in a HITL situation. The author does not claim, of course, that all the i's are dotted and all the t's are crossed by the suggested approach.
Plenty of additional work should be done to "reduce to practice" the findings of this paper, as well as those suggested in the author's previous HITL-related publications.

Survivability of Species in Different Habitats
"There were sharks before there were dinosaurs, and the reason sharks are still in the ocean is that nothing is better at being a shark than a shark." Eliot Peper, American writer
Survivability of species in different habitats is important, particularly in connection with travel to, and exploration of, the habitat conditions in outer space. The BAZ equation enables one to consider the effects of as many stressors as necessary, such as, say, radiation, hygrometry, oxygen rate, pressure, etc. It should be emphasized that all the stressors of interest are applied simultaneously/concurrently, which takes care of their possible coupling, nonlinear effects, etc. The physically meaningful and highly flexible kinetic BAZ approach just helps to bridge the gap between what one "sees" as a result of the appropriate FOAT and what he/she will supposedly "get" in the actual "field"/"habitat" conditions.
Let, e.g., the challenge of adaptation to a space flight and to new planetary environments be addressed: a particular species is tested until "death" (whatever the indication of it might be), and the roles of the temperature T and the gravity G are considered in an astro-biological experiment of importance. Such an experiment corresponds to FOAT (testing to failure) in electronic and photonic reliability engineering. The double-exponential BAZ equation can then be written for the application in question. Here the C* value is an objective quantitative evidence/indication that the particular organism, or a group of organisms, died; U0 is the stress-free activation energy that characterizes the health, or the typical longevity, of the given species; and the "gammas" are sensitivity factors. The equation contains three unknowns: the stress-free activation energy U0 and the two sensitivity factors. These unknowns could be determined from the available observed data or from a specially designed, carefully conducted and thoroughly analyzed FOAT. At the first step, testing at two constant temperature levels, T1 and T2, is conducted, while the gravity stressor G, and hence the effective activation-energy level U0 − γG G, remain the same in both sets of tests. The notation n = −ln P/(C* t) is introduced here. Since the left part of the equation is kept the same in both sets of tests, this equation results in a formula for the γC factor in terms of the temperature ratio θ = T1/T2. The second step of testing should be conducted at two different G levels. Since the stress-free activation energy should be the same in both sets of tests, the factor γG could be found from the corresponding n1 and n2 values, which should be determined using the above formula, but are, of course, different from the values obtained as a result of the first step of testing. Note that the γG factor is independent of the factor γC. The stress-free activation energy U0 can then be found; the result should be, of course, the same whether the index "1" or "2" is used.
It is noteworthy that the suggested approach is expected to be more accurate for low-temperature conditions, below the melting temperature of ice, which is 0 °C = 273 K. It has been established, at least for microbes, that the survival probabilities below and above this temperature are completely different. It is possible that the absolute temperature in the denominator of the original BAZ equation and of the multi-parametric equations should be replaced, in order to account for the nonlinear effect of the absolute temperature, by, say, a T^m value, where the exponent m is different from one.
Let, e.g., the criterion of the death of the tested species be, say, C* = 100 (whatever the units), and let testing be conducted until half of the population dies, so that P1,2 = 0.5. In the first step of testing this happened after t1 = 50 h; the temperature level was then increased by a factor of 4. The second step of testing was conducted at the temperature level of T = 20 K: half of the tested population failed after t1 = 100 h, when testing was conducted at the gravity level of G1 = 10 m/s², and after t2 = 30 h, when the gravity level was twice as high. The thermal energy kT = 8.6173303 × 10⁻⁵ × 20 = 17.2347 × 10⁻⁴ eV was the same in both cases.
The stress-free activation energy (this energy characterizes the biology of the particular species) can now be evaluated; to make sure that there is no numerical error, it is advisable to evaluate it using both sets of data and to compare the results. This energy characterizes the nature of a particular species from the viewpoint of its survivability in outer space under the given temperature and gravity conditions and for the given duration of time. Clearly, in a more detailed analysis, the role of other environmental factors, such as, say, vacuum, temperature variations and extremes, radiation, etc., can also be considered. From the formula (8) we obtain the expression for the lifetime (time to failure/death) for G = 10 m/s²; this relationship is tabulated in the following Table 5.
In the case of G = 0, we have t = −467.6846 ln P (the minus sign assures a positive time, since ln P ≤ 0 for P ≤ 1). This relationship is tabulated in Table 6.
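The gravity-step arithmetic of this example can be verified in a few lines of code, using the relation n = −ln P/(C* t) and the two-step FOAT scheme described above. (The extrapolated G = 0 lifetime constant computed this way comes out near 481 h rather than the 467.68 h quoted above; the difference is presumably due to rounding in the intermediate values of the original calculation.)

```python
import math

k_B = 8.6173303e-5          # Boltzmann constant, eV/K
T = 20.0                    # second-step test temperature, K
kT = k_B * T                # thermal energy, eV

C_star, P = 100.0, 0.5      # death criterion and surviving fraction
G1, t1 = 10.0, 100.0        # gravity (m/s^2) and time to half-death (h)
G2, t2 = 20.0, 30.0

# effective "failure rate" n = -ln(P) / (C* t) for each gravity level
n1 = -math.log(P) / (C_star * t1)
n2 = -math.log(P) / (C_star * t2)

# same stress-free activation energy in both tests implies
# gamma_G = kT * ln(n2 / n1) / (G2 - G1)
gamma_G = kT * math.log(n2 / n1) / (G2 - G1)   # ~2.075e-4 eV per m/s^2

# extrapolate the characteristic lifetime tau (t = -tau * ln P) to G = 0
tau_G1 = t1 / (-math.log(P))                   # ~144.3 h at G = 10 m/s^2
tau_G0 = tau_G1 * math.exp(gamma_G * G1 / kT)  # ~481 h at zero gravity
```

The factor-of-3.33 lifetime gain at G = 0 confirms the observation below that lower gravity yields a considerably longer lifetime in this example.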
Lower gravity resulted, in this example, in a considerably longer lifetime. It is noteworthy that at the microbiological level the effect of gravitational forces might be considerably less significant than, say, electromagnetic or radiation influences. As a matter of fact, the BAZ method has recently been employed in application to electron devices subjected to radiation [45], and the approach is certainly applicable to the biological problem addressed in this paper. Different types of radiation are well-known "life killers".
Analytical modeling, although not very popular today in electronic-instrumentation reliability and human-performance predictions, occupies nonetheless a special place in a physically meaningful and trustworthy predictive modeling effort [1][2][3][4][5][6][7][8]. This is because analytical modeling is able not only to come up with relationships that clearly indicate "what affects what" and "what is responsible for what", but, more importantly, can often explain the underlying physics of the phenomena affecting the situations of interest better than numerical simulations, or even experiments, can. Computer simulations have now become the major research and design tool in engineering. This should be attributed, first of all, to the availability of powerful and flexible computer programs that enable one to obtain, within a reasonable time, a solution to almost any problem of interest. The broad application of computers has, however, by no means made analytical solutions unnecessary or even less important. Simple, easy-to-use and physically meaningful analytical relationships have invaluable advantages, because of the clarity and compactness of the information they provide and their explicit indication of the role of the various critical factors affecting the given phenomenon, the behavior of the material or the device, the human performance, and the human-system interaction and integration.
But even when the application of computer simulations is straightforward and encounters no difficulties, it is always advisable to investigate the problem analytically before carrying out computer-aided analyses. Such a preliminary investigation helps to reduce computer time and expense, to develop the most feasible and effective computer model and, in many cases, to avoid fundamental errors. Another consideration in favor of analytical modeling is associated with the illusion of simplicity in applying simulation procedures. Some users of the numerous software programs believe that the "black box" they deal with will automatically provide the right answer, as long as they push the right keys on the computer keyboard. It is well known to those with hands-on experience with various computer simulation programs that, although it might indeed be easy to obtain a solution, there is no guarantee that the right solution has been obtained. And how would one know that it is the right solution, if there is nothing to compare it with? Clearly, if the simulation data are in good agreement with the results of analytical modeling (which is typically based on different assumptions and uses different calculation techniques), then there is reason to believe that the obtained solution is accurate enough.
A crucial requirement for an effective analytical model is its simplicity and clear physical meaning. A good analytical model, to be of real help, should be based on physically meaningful considerations and produce simple and easy-to-use relationships, clearly indicating the role of the major factors affecting the phenomenon or the object of interest. Arrhenius and BAZ models, not to mention the famous E = mc 2 formula, are of that type. One authority in applied physics remarked, perhaps only partly in jest, that the degree of understanding of a phenomenon or a process is inversely proportional to the number of variables used for its description. It takes a lot of imagination and good intuition to come up with appropriate assumptions to develop a meaningful analytical expression, while it is merely skills that are usually needed to apply suitable simulation software.
In connection with any modeling, whether computer-aided or analytical, it is advisable, of course, to assess its need, accuracy and suitability in a particular reliability and/or safety related effort of importance by considering its role and applicability based on the results of an experiment and/or meaningful posterior statistics. It should be pointed out also that the limitation of a particular theoretical model could be also assessed based on a more general theoretical model. E.g., limitations of a linear approach could be determined on the basis of a more general non-linear model.
Although an experimental or a statistical approach, unsupported by a theory, is "blind", a theory, not supported by an experiment or by suitable, meaningful and trustworthy statistics, is "dead". It is an experiment, or trustworthy statistics of observed failures or mishaps, that forms the basis for a theoretical model, provides the input data for such modeling, and determines the viability, accuracy, and limits of application of a theoretical model. The limitations of a theoretical model are different in different problems and, in the majority of cases, are not known beforehand. It is "experimental modeling" that is the supreme and ultimate judge of a theoretical model.

Multi-parametric BAZ equation
The Boltzmann-Arrhenius-Zhurkov (BAZ) equation, suggested in 1957 by the Russian physicist S.N. Zhurkov [9,10], was used by him and his associates in application to experimental fracture mechanics:
τ = τ0 exp[(U0 − γσ)/(kT)].    (B-1)
Here τ is the mean time to failure (MTTF); U0 is the loading-independent activation energy (the term was coined by the Swedish chemist S. Arrhenius); kT is the thermal energy caused by the elevated temperature T and evaluated as the product of Boltzmann's constant k = 8.6173303 × 10⁻⁵ eV/K and the absolute temperature T; γσ is the strain energy due to the external tensile mechanical loading σ, if any; γ is the stress-sensitivity factor; and τ0 is the time constant that is supposed to be determined experimentally for the given material, along with the U0 and γ values. The equation (B-1) is a generalization of the well-known Arrhenius equation [11]
τ = τ0 exp[U0/(kT)].    (B-2)
It has been recently shown [15] that the equation (B-1) and its special case (B-2) can be obtained as steady-state solutions to the Fokker-Planck equation in the theory of Markovian processes (see, e.g., [16]), when the activation energy is treated as a random variable whose evolution can be described in terms of a Markovian process. It was found that the steady-state solutions represent the worst-case scenarios, i.e., the highest effective activation energies, so that reliability predictions based on the steady-state BAZ model (B-1) are reasonably conservative and, hence, advisable for engineering applications.
In Zhurkov's tests the loading σ was always a constant mechanical tensile stress, and the test specimens were always notched ones (Figure 2). It has been recently shown [17][18][19][20][21][22][23] that any other physically meaningful loading (stimulus) of importance, such as, e.g., voltage, current, thermal stress, elevated humidity, mechanical shocks, vibrations, radiation, light output, etc., can be employed as an appropriate stressor in accelerated testing. It was suggested also that, since the principle of superposition does not work in reliability engineering (because of unpredictable nonlinear and coupling effects), a combination of the relevant stimuli should be considered. In other words, it has been suggested that the relationship (B-6) could be replaced with the multi-parametric relationship
τ = τ0 exp[(U0 − Σi γi σi)/(kT)],    (B-7)
where the σi are the applied stressors and the γi are the corresponding sensitivity factors. This generalization has been suggested in connection with the development of the highly powerful and highly flexible PDfR concept [17][18][19][20] for E&P materials, devices, assemblies, packages and systems. The PDfR concept quantifies, on the probabilistic basis, the lifetime of an E&P product using highly focused and highly cost-effective FOAT [21][22][23][24] data.
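The structure of the multi-parametric relationship (B-7) can be sketched in a few lines of code. The numerical values of the time constant, the activation energy, and the sensitivity factors below are hypothetical, chosen only to illustrate how additional stressors lower the effective activation energy and hence the MTTF:

```python
import math

k_B = 8.6173303e-5  # Boltzmann constant, eV/K

def baz_mttf(tau0, U0, stressors, T):
    """Multi-parametric BAZ mean time to failure,
    tau = tau0 * exp[(U0 - sum(gamma_i * sigma_i)) / (kT)].

    tau0      : time constant (same units as the returned MTTF)
    U0        : stress-free activation energy, eV
    stressors : iterable of (gamma_i, sigma_i) pairs; each product
                gamma_i * sigma_i must come out in eV
    T         : absolute temperature, K
    """
    effective_energy = U0 - sum(g * s for g, s in stressors)
    return tau0 * math.exp(effective_energy / (k_B * T))

# hypothetical example: a voltage stressor and a humidity stressor,
# applied concurrently, reduce the effective activation energy by 0.1 eV
tau_unstressed = baz_mttf(1.0, 0.5, [], 350.0)
tau_stressed = baz_mttf(1.0, 0.5, [(0.02, 3.0), (0.001, 40.0)], 350.0)
```

Because the stressors enter through a single sum in the exponent, their combined (concurrent) application is captured without invoking superposition, which is the point made in the text above.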
Considering (B-7), the expression (B-5) for the probability of non-failure can be written in the following multi-parametric form:
P = exp[−γC C* t exp(−(U0 − Σi γi σi)/(kT))].    (B-8)
The Arrhenius equation (B-2) accounts for the temperature-induced aging (degradation) of the material of interest, but the equation (B-1) considers also the effect of the external mechanical loading, if any, on both the long- and the short-term strength of the material. In the original Arrhenius theory the activation energy U0 characterizes the material's propensity to get engaged in a chemical reaction, while in Zhurkov's model it characterizes, first of all, the material's propensity to the propagation of a fracture. The equation (B-2) is formally not different from one of the modifications of Boltzmann's equation [9][10][11][12][13][14] in the kinetic theory of gases. Boltzmann's theory postulates that the absolute temperature of an ideal gas is determined by the average probability of the collisions of its particles (atoms or molecules). Arrhenius was a member of Boltzmann's team at the University of Graz, Austria, in 1887, and observed an analogy between Boltzmann's equation (B-2) for the gas energy and the chemical energy barrier, the "activation energy", which had to be overcome to trigger a chemical reaction.
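The first (two-temperature) step of the FOAT parameter extraction described earlier can be sketched as follows. The code assumes the double-exponential form P = exp[−γC C* t exp(−(U0 − γG G)/(kT))] with the notation n = −ln P/(C* t); the input numbers (temperatures, times, criterion) are hypothetical, chosen only to show that two tests at different temperatures, with the same gravity level, suffice to recover both the effective activation energy and the pre-exponential factor:

```python
import math

k_B = 8.6173303e-5  # Boltzmann constant, eV/K

def extract_baz_params(n1, T1, n2, T2):
    """Solve n = gamma_C * exp(-U_eff / (kT)) for the two unknowns
    U_eff (= U0 - gamma_G * G, in eV) and gamma_C, using two tests run
    at temperatures T1 < T2 with the same gravity (stress) level."""
    U_eff = k_B * math.log(n2 / n1) / (1.0 / T1 - 1.0 / T2)
    gamma_C = n1 * math.exp(U_eff / (k_B * T1))
    return U_eff, gamma_C

# hypothetical first-step data: half the population (P = 0.5) "fails"
# after t1 = 200 h at T1 = 10 K and after t2 = 5 h at T2 = 40 K,
# with the death criterion C* = 100 (arbitrary units)
C_star, P = 100.0, 0.5
n1 = -math.log(P) / (C_star * 200.0)
n2 = -math.log(P) / (C_star * 5.0)
U_eff, gamma_C = extract_baz_params(n1, 10.0, n2, 40.0)
```

A built-in consistency check is that the recovered pair (U_eff, gamma_C) must reproduce the measured n value at the second temperature exactly, since the two-point fit is algebraically exact.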
In the equations (B-1) and (B-2), the MTTF τ is interpreted as the time at which the entropy of the probability-of-non-failure distribution reaches its maximum value.