Markov random fields

Definition

Markov random fields are a class of graphical models that represent the conditional independence structure of multivariate data, depicting variables as nodes and direct associations as edges between them. In a Markov random field, the absence of an edge between two variables indicates that they are conditionally independent after controlling for all remaining variables in the network. Bayesian approaches to analyzing Markov random fields place prior distributions on both the network structure and the parameters, and use the edge inclusion Bayes factor to test for conditional independence and to quantify uncertainty about network parameters.

Sources: Sekulovski et al. (2024)
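The link between missing edges and conditional independence can be made concrete in the Gaussian case, where a Markov random field is encoded by the precision (inverse covariance) matrix: a zero off-diagonal entry means the corresponding pair of variables is conditionally independent given the rest. A minimal sketch with hypothetical numbers (not taken from the cited paper):

```python
import numpy as np

# Hypothetical 3-variable Gaussian Markov random field.
# A zero off-diagonal entry in the precision matrix means the two
# variables are conditionally independent given the remaining variable.
precision = np.array([
    [1.0, 0.4, 0.0],   # X1-X2 edge present; X1-X3 edge absent
    [0.4, 1.0, 0.3],   # X2-X3 edge present
    [0.0, 0.3, 1.0],
])

# Partial correlation of X1 and X3 controlling for X2:
# rho_ij = -precision[i, j] / sqrt(precision[i, i] * precision[j, j])
rho_13 = -precision[0, 2] / np.sqrt(precision[0, 0] * precision[2, 2])
print(rho_13)  # zero: X1 and X3 are conditionally independent

# Marginally, X1 and X3 can still be correlated through X2:
cov = np.linalg.inv(precision)
print(cov[0, 2])  # nonzero marginal covariance despite the absent edge
```

The contrast between the zero partial correlation and the nonzero marginal covariance is the point of the edge semantics: an absent edge rules out a direct association, not all association.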


Applications

Markov Random Fields and Conditional Independence

Markov random fields model the underlying conditional independence structure of data, with the absence of an edge between two variables indicating that they are conditionally independent after controlling for the remaining variables in the network. The edge inclusion Bayes factor, a Bayesian test of conditional independence in Markov random field models, addresses the problem of distinguishing absence of evidence from evidence of absence for an edge.

Sources: Sekulovski et al. (2024)
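One common way to define the edge inclusion Bayes factor is as the ratio of posterior to prior inclusion odds for a single edge. The sketch below uses that definition with hypothetical inclusion probabilities (the helper function and numbers are illustrative, not from the cited paper) to show the three-way distinction the Bayes factor supports:

```python
def inclusion_bayes_factor(posterior_incl, prior_incl=0.5):
    """Edge inclusion Bayes factor: ratio of posterior to prior
    inclusion odds for one edge (illustrative helper function)."""
    posterior_odds = posterior_incl / (1 - posterior_incl)
    prior_odds = prior_incl / (1 - prior_incl)
    return posterior_odds / prior_odds

# Evidence FOR the edge (conditional dependence): BF well above 1
print(inclusion_bayes_factor(0.9))   # about 9

# Evidence of ABSENCE (conditional independence): BF well below 1
print(inclusion_bayes_factor(0.1))   # about 0.11

# Absence of evidence: BF near 1, the data are simply uninformative
print(inclusion_bayes_factor(0.5))   # 1
```

A frequentist significance test can only fail to reject independence; the Bayes factor separates "the data say nothing" (BF near 1) from "the data actively support independence" (BF well below 1).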

Markov Random Fields and Prior Distributions

Bayesian analysis of Markov random field models requires specifying two sets of prior distributions—one for the network structure and another for edge weight parameters—and the choice of these prior distributions has a significant impact on the edge inclusion Bayes factor used to test for conditional independence. The scale of the prior distribution on edge weight parameters is a critical factor, as even small variations can substantially alter the Bayes factor's sensitivity and its ability to distinguish between the presence and absence of edges.

Sources: Sekulovski et al. (2024)
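The sensitivity to the prior scale can be illustrated with a Savage-Dickey density ratio, which expresses the Bayes factor for excluding an edge as the posterior density of the edge weight at zero divided by its prior density at zero. The sketch below assumes a Normal(0, s²) prior and a Gaussian approximation to the likelihood; the numbers and the helper function are hypothetical, and this is a simplified stand-in for the analysis in the cited paper:

```python
import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def savage_dickey_bf01(b_hat, se, prior_sd):
    """Bayes factor in favor of EXCLUDING an edge (weight = 0),
    computed as posterior density at 0 / prior density at 0.
    Assumes a Normal(0, prior_sd^2) prior on the edge weight and a
    Gaussian likelihood summarized by estimate b_hat with standard
    error se (illustrative sketch)."""
    post_var = 1 / (1 / prior_sd ** 2 + 1 / se ** 2)
    post_mean = post_var * b_hat / se ** 2
    return normal_pdf(0, post_mean, math.sqrt(post_var)) / normal_pdf(0, 0, prior_sd)

# Identical data (b_hat = 0.1, se = 0.2), three prior scales:
for s in (0.5, 1.0, 5.0):
    print(s, savage_dickey_bf01(0.1, 0.2, s))
# Widening the prior inflates the evidence for excluding the edge,
# so the prior scale directly shifts the conclusion of the test.
```

This is the Bartlett-type behavior behind the sensitivity noted above: a wider prior spreads mass over implausibly large edge weights, so the same data look like stronger evidence for the edge's absence.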
