A Bayesian predictive approach to determining the number of components in a mixture distribution, by Dey D. K., Kuo L., Sahu S. K.


This paper describes a Bayesian approach to mixture modelling and a method based on the predictive distribution to determine the number of components in the mixture. The implementation is done through the use of the Gibbs sampler. The method is described through mixtures of normal and gamma distributions. Analysis is presented in one simulated and one real data example. The Bayesian results are then compared with the likelihood approach for the two examples.
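The Gibbs-sampler machinery mentioned in the abstract can be sketched for the simplest case: a two-component normal mixture with known unit variances. This is an illustrative assumption, not the paper's implementation — the priors (N(0, 100) on each mean, uniform on the mixing weight), the function name, and the fixed component count are all choices made here for the sketch; the paper's predictive-distribution criterion for choosing the number of components is not shown.

```python
import math
import random

def gibbs_two_normal_mixture(x, iters=500, burn=200, seed=1):
    # Illustrative sketch only: two components with unit variances,
    # N(0, 100) priors on the means, uniform prior on the weight.
    random.seed(seed)
    n = len(x)
    mu = [min(x), max(x)]              # crude, order-preserving start
    p = 0.5                            # mixing weight of component 0
    kept_mu, kept_p = [], []
    for t in range(iters):
        # 1. Sample each label z_i given the current (mu, p).
        z = []
        for xi in x:
            w0 = p * math.exp(-0.5 * (xi - mu[0]) ** 2)
            w1 = (1.0 - p) * math.exp(-0.5 * (xi - mu[1]) ** 2)
            z.append(0 if random.random() < w0 / (w0 + w1) else 1)
        # 2. Sample p | z ~ Beta(n0 + 1, n1 + 1) via two gammas.
        n0 = z.count(0)
        a = random.gammavariate(n0 + 1, 1.0)
        b = random.gammavariate(n - n0 + 1, 1.0)
        p = a / (a + b)
        # 3. Sample each mean from its conjugate normal posterior.
        for k in (0, 1):
            xs = [xi for xi, zi in zip(x, z) if zi == k]
            var = 1.0 / (len(xs) + 1.0 / 100.0)
            mu[k] = random.gauss(var * sum(xs), math.sqrt(var))
        if t >= burn:
            kept_mu.append(tuple(mu))
            kept_p.append(p)
    m0 = sum(m[0] for m in kept_mu) / len(kept_mu)
    m1 = sum(m[1] for m in kept_mu) / len(kept_mu)
    return m0, m1, sum(kept_p) / len(kept_p)

# Demo on well-separated synthetic data (values chosen for the sketch).
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(50)]
        + [random.gauss(8.0, 1.0) for _ in range(50)])
m0, m1, p_hat = gibbs_two_normal_mixture(data)
```

With the components this well separated, the posterior means of the two component locations land close to the values used to simulate the data; with overlapping components, label switching would require more care than this sketch takes.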



Similar probability books

Empirical likelihood

Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources.

Shape Optimization under Uncertainty from a Stochastic Programming Point of View

Optimization problems are relevant in many areas of technical, industrial, and economic applications. At the same time, they pose challenging mathematical research problems in numerical analysis and optimization. Harald Held considers an elastic body subjected to uncertain internal and external forces. Since simply averaging the possible loadings will result in a structure that may not be robust for the individual loadings, he uses techniques from level set-based shape optimization and two-stage stochastic programming.

OECD Glossary of Statistical Terms (OECD Glossaries)


Statistics and Probability Theory: In Pursuit of Engineering Decision Support

Of practical relevance - yet theoretically sound and consistent
Emphasis on relevance for engineering decision support and assessments
Written for engineers by an engineer

This book provides the reader with the basic skills and tools of statistics and probability in the context of engineering modeling and analysis. The emphasis is on the application and the reasoning behind the application of these skills and tools for the purpose of enhancing decision making in engineering.

The purpose of the book is to ensure that the reader acquires the necessary theoretical basis and technical skills so as to feel comfortable with the theory of basic statistics and probability. Moreover, in this book, as opposed to many standard books on the same subject, the perspective is to focus on the use of the theory for the purpose of engineering model building and decision making. This work is suitable for readers with little or no prior knowledge of statistics and probability.

Content level » Professional/practitioner

Keywords » Bayesian probability theory - Engineering model building - Engineering decision support - Probability - Statistics

Related subjects » Physical & Information Science - Probability Theory and Stochastic Processes - Production & Process Engineering

Additional info for A Bayesian predictive approach to determining the number of components in a mixture distribution

Example text

To display R(µ) we need to compute it at several values. Let µ(i) = X̄ + iδ for some δ > 0 and integer i ≥ 0. A good strategy to compute the right side of the empirical likelihood ratio curve is to compute R(µ(i)) for i increasing from 0, where R(µ(0)) = 1, until log R(µ(i)) is too small to be of interest, but in any case stopping before µ(i) > X(n). A value of log R(µ(i)) = −25.0 corresponds to a nominal χ2(1) value of −2 × (−25.0) = 50, for which the p-value is about 1.5 × 10^−12, and we seldom need to consider p-values smaller than this. When searching for λ(µ(i)), a good starting value is λ(µ(i−1)), and we may begin with λ(µ(0)) = 0.
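The scanning strategy just described can be sketched in code. It relies on the standard empirical likelihood facts that the maximizing weights are wi = 1/(n(1 + λ(Xi − µ))) and that λ(µ) solves Σi (Xi − µ)/(1 + λ(Xi − µ)) = 0; the function name and the safeguarded-Newton details below are choices made here for the sketch, not code from the book.

```python
import math

def scan_el_curve(x, delta, cutoff=-25.0):
    # Scan the right side of the empirical likelihood ratio curve for
    # the mean: step mu(i) = xbar + i*delta, warm-starting the Newton
    # search for lam(mu(i)) at the previous solution lam(mu(i-1)).
    # Assumes the data are not all equal.
    n = len(x)
    xbar = sum(x) / n
    lam = 0.0                        # lam(mu(0)) = 0 at mu = xbar
    curve = []                       # list of (mu, log R(mu)) pairs
    i = 0
    while True:
        mu = xbar + i * delta
        if mu >= max(x):             # stop before mu(i) > X(n)
            break
        d = [xi - mu for xi in x]
        lo = -1.0 / max(d) + 1e-10   # feasibility: 1 + lam*d_i > 0
        hi = -1.0 / min(d) - 1e-10
        lam = min(max(lam, lo), hi)  # project the warm start
        for _ in range(100):         # safeguarded Newton iterations
            g = sum(di / (1.0 + lam * di) for di in d)
            gp = -sum(di * di / (1.0 + lam * di) ** 2 for di in d)
            lam = min(max(lam - g / gp, lo), hi)
            if abs(g) < 1e-10:
                break
        log_r = -sum(math.log(1.0 + lam * di) for di in d)
        curve.append((mu, log_r))
        if log_r < cutoff:           # too small to be of interest
            break
        i += 1
    return curve
```

The projection of the warm start into the feasible interval matters because the interval of admissible λ shrinks as µ approaches X(n); the left side of the curve would be scanned symmetrically with a negative δ.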

Let the ordered sample values be X(1) ≤ · · · ≤ X(n). First we eliminate the trivial cases. If µ < X(1) or µ > X(n) then there are no weights wi ≥ 0 summing to 1 for which Σi wi Xi = µ. In such cases we take log R(µ) = −∞, and R(µ) = 0 by convention. Similarly if µ = X(1) < X(n) or µ = X(n) > X(1) we take R(µ) = 0, but if X(1) = X(n) = µ, we take R(µ) = 1. Now consider the nontrivial case, with X(1) < µ < X(n). We seek to maximize Π_{i=1}^{n} nwi, or equivalently Σ_{i=1}^{n} log(nwi), over wi ≥ 0 subject to the constraints Σ_{i=1}^{n} wi = 1 and Σ_{i=1}^{n} wi Xi = µ.
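The trivial-case conventions and the constrained maximization above can be sketched as follows. By the standard Lagrange-multiplier argument the maximizing weights are wi = 1/(n(1 + λ(Xi − µ))) with λ chosen so that Σi wi(Xi − µ) = 0; the sketch below (the function name and tolerance are choices made here) solves for λ by bisection, which is reliable because the score in λ is strictly decreasing on the feasible interval.

```python
import math

def log_el_ratio(x, mu, tol=1e-12):
    # Log empirical likelihood ratio log R(mu) for the mean of x.
    lo_x, hi_x = min(x), max(x)
    # Trivial cases, exactly as in the text.
    if mu < lo_x or mu > hi_x:
        return float("-inf")
    if lo_x == hi_x:                 # all observations equal
        return 0.0 if mu == lo_x else float("-inf")
    if mu == lo_x or mu == hi_x:     # boundary point, lo_x < hi_x
        return float("-inf")
    d = [xi - mu for xi in x]
    # Feasibility: every 1 + lam*d_i must stay positive.
    lam_lo = -1.0 / max(d) + tol
    lam_hi = -1.0 / min(d) - tol
    def score(lam):
        return sum(di / (1.0 + lam * di) for di in d)
    # score() is strictly decreasing in lam; bisect for its root.
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if score(mid) > 0:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    return -sum(math.log(1.0 + lam * di) for di in d)
```

At µ = X̄ the root is λ = 0, every weight is 1/n, and log R(µ) = 0, which matches R(µ(0)) = 1 in the scanning strategy.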

The marginal likelihood is the likelihood from the marginal distribution of the Xi as if we had not observed Yi. The conditional likelihood is the likelihood from the conditional distribution of Yi given Xi as if the Xi were not random. We may prefer to use marginal likelihood estimating equations E[∂ log fX(Xi; θ, ν)/∂θ] = 0 or conditional ones E[∂ log fY|X(Yi | Xi; θ, ν)/∂θ] = 0. For example, if θ only appears in one of the likelihood factors Lm and Lc, then there is no information lost by using that factor as the likelihood, and there may be a computational advantage.
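A toy instance of the last point (a model assumed here for illustration, not taken from the excerpt): if Xi ~ N(ν, 1) and Yi | Xi ~ N(θXi, 1), then θ appears only in the conditional factor, so nothing is lost by working with the conditional estimating equation Σi Xi(Yi − θXi) = 0, which has a closed-form solution.

```python
import random

def conditional_score_estimate(pairs):
    # Solve the conditional estimating equation
    #   sum_i X_i * (Y_i - theta * X_i) = 0
    # for the illustrative model Y_i | X_i ~ Normal(theta * X_i, 1).
    sxy = sum(x * y for x, y in pairs)
    sxx = sum(x * x for x, y in pairs)
    return sxy / sxx

# Demo: the marginal distribution of X_i carries no information about
# theta in this model, so the conditional factor alone identifies it.
random.seed(3)
theta_true = 2.0
pairs = [(x, theta_true * x + random.gauss(0, 1))
         for x in (random.gauss(0, 1) for _ in range(500))]
theta_hat = conditional_score_estimate(pairs)
```

The computational advantage is visible here: the conditional equation is linear in θ, while the full joint likelihood would drag the nuisance parameter ν along for no gain.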

