Package edu.cmu.tetrad.search
Contains classes for searching for (mostly structural) causal models given data.
Class descriptions:

- Uses BOSS in place of FGES for the initial step in the GFCI algorithm.
- Implements Best Order Score Search (BOSS).
- Implements an algorithm that first finds a CPDAG for the variables and then uses a non-Gaussian orientation method to orient the undirected edges.
- Implements the Build Pure Clusters (BPC) algorithm, which allows one to identify clusters of measured variables in a dataset that are explained by a single latent.
- Implements the Cyclic Causal Discovery (CCD) algorithm by Thomas Richardson.
- Adjusts FCI to use conservative orientation as in CPC.
- Identifies violations of knowledge for a given graph.
- CompositeIndependenceTest class.
- The type of conditioning set to use for the Markov check.
- Implements a conservative version of PC, in which the Markov condition is assumed but faithfulness is tested locally.
- CpdagParentDistancesFromTrue computes the distances between true edge strengths in a true DAG and the range of estimated edge strengths in an output CPDAG.
- The type of distance to calculate.
- Implements the CStaR algorithm (Stekhoven et al., 2012), which finds a CPDAG of the data and then tries all orientations of the undirected edges about a variable in the CPDAG to estimate a minimum bound on the effect for a given edge.
- An enumeration of the options available for determining the CPDAG used for the algorithm.
- Represents a single record in the returned table for CStaR.
- An enumeration of the methods for selecting samples from the full dataset.
- Implements the DAGMA algorithm.
- Implements the Direct-LiNGAM algorithm.
- Gives an interface for classes where the effective sample size can be set by the user.
- Implements the classical Factor Analysis algorithm.
- Implements the Fast Adjacency Search (FAS), which is the adjacency search of the PC algorithm.
- Adjusts FAS for the deterministic case by refusing to remove edges based on conditional independence tests that are judged to be deterministic.
- Implements the FASK (Fast Adjacency Skewness) algorithm, which makes decisions for adjacency and orientation using a combination of conditional independence testing, judgments of nonlinear adjacency, and pairwise orientation due to non-Gaussianity.
- Enumerates the left-right rule options to use for FASK.
- Implements the FASK (Fast Adjacency Skewness) algorithm, which makes decisions for adjacency and orientation using a combination of conditional independence testing, judgments of nonlinear adjacency, and pairwise orientation due to non-Gaussianity.
- Enumerates the alternatives to use for finding the initial adjacencies for FASK.
- Enumerates the left-right rule options to use for FASK.
- Translates a version of the FastICA algorithm used in R from Fortran into Java for use in Tetrad.
- A list containing the components of a FastICA result.
- Implements the Fast Causal Inference (FCI) algorithm due to Peter Spirtes, which addresses the case where latent common causes cannot be assumed not to exist with respect to the data set being analyzed.
- Modifies FCI to do orientation of unshielded colliders (X*-*Y*-*Z with X and Z not adjacent) using the max-P rule (see the PC-Max algorithm).
- A simple implementation of Dijkstra's algorithm for finding the shortest path in a graph.
- Represents a node in Dijkstra's algorithm.
- Represents a graph for Dijkstra's algorithm.
- Implements the Fast Greedy Equivalence Search (FGES) algorithm.
- Implements the Fast Greedy Equivalence Search (FGES) algorithm.
- Implements the Find One Factor Clusters (FOFC) algorithm by Erich Kummerfeld, which uses reasoning about vanishing tetrads to infer clusters of the measured variables in a dataset that can each be explained by a single latent variable.
- Gives the options to be used in FOFC to sort through the various possibilities for forming clusters to find the best options.
- Implements the Find Two Factor Clusters (FTFC) algorithm, which uses reasoning about vanishing tetrads to infer clusters of the measured variables in a dataset that can each be explained by two latent variables.
- Gives the options to be used in FOFC to sort through the various possibilities for forming clusters to find the best options.
- Implements a modification of FCI that starts by running the FGES algorithm and then fixes that result to be correct for latent variable models.
- Implements the GRaSP algorithm, which searches the space of permutations of variables for ones that imply CPDAGs that are especially close to the CPDAG of the true model.
- Uses GRaSP in place of FGES for the initial step in the GFCI algorithm.
- Implements the Grow-Shrink algorithm of Margaritis and Thrun, a simple yet correct and useful Markov blanket search.
- Implements the ICA-LiNGAM algorithm.
- Implements the ICA-LiNG-D algorithm as well as some auxiliary methods for ICA-LiNG-D and ICA-LiNGAM.
- Implements the IDA algorithm.
- Gives a list of nodes (parents or children) and corresponding minimum effects for the IDA algorithm.
- Calculates total effects and absolute total effects for an MPDAG G for all distinct pairs (x, y) of variables, where the total effect is obtained by regressing y on x ∪ S and reporting the regression coefficient.
- Gives an interface for fast adjacency searches (i.e., PC adjacency searches).
- Gives an interface for a search method that searches and returns a graph.
- Gives an interface for Markov blanket searches.
- Gives an interface that can be implemented by classes that do conditional independence testing.
- Checks independence results by listing all tests with those variables, testing each one, and returning the resolution of these test results.
- Implements a number of methods that take a fixed graph as input and use linear, non-Gaussian methods to orient the edges in the graph.
- Gives a list of options for rules for doing the non-Gaussian orientations.
- Gives a list of options for non-Gaussian transformations that can be used for some scores.
- LV-Dumb is a class that implements the IGraphSearch interface.
- The LV-Lite (Latent Variable "Lite") algorithm implements a search for learning the structure of a graphical model from observational data with latent variables.
- The ExtraEdgeRemovalStyle enum specifies the styles for removing extra edges.
- Enumeration representing different start options.
- Checks whether a graph is Markov given a data set.
- Stores the set of m-separation facts and the set of m-connection facts for a graph, for the global check.
- A single record for the results of the Markov check.
- Provides an implementation of Mimbuild, an algorithm that takes a clustering of variables, each of which is explained by a single latent, forms the implied covariance matrix over the latent variables, and then runs a CPDAG search to infer the structure over the latents themselves.
- Implements Mimbuild using the theory of treks and ranks.
- The ModelObserver interface is implemented by classes that want to observe changes in a model.
- Implements the Peter/Clark (PC) algorithm, which uses conditional independence testing as an oracle to first remove extraneous edges from a complete graph, then orient the unshielded colliders in the graph, and finally make any additional orientations that are capable of avoiding additional unshielded colliders in the graph.
- Modifies the PC algorithm to handle the deterministic case.
- Searches for a CPDAG representing all the Markov blankets for a given target T, consistent with the given independence information.
- Implements common elements of a permutation search.
- Implements the Really Fast Causal Inference (RFCI) algorithm, which aims to correctly infer the inferable causal structure under the assumption that unmeasured common causes of variables in the data may exist.
- Provides methods for finding sepsets in a given graph.
- Implements the SP (Sparsest Permutation) algorithm.
- Uses SP in place of FGES for the initial step in the GFCI algorithm.
- An interface for suborder searches for various types of permutation algorithms.
- Adapts FAS for the time series setting, assuming the data is generated by an SVAR (structural vector autoregression).
- Adapts FCI for the time series setting, assuming the data is generated by an SVAR (structural vector autoregression).
- Adapts FGES for the time series setting, assuming the data is generated by an SVAR (structural vector autoregression).
- Represents a GFCI search algorithm for structure learning in causal discovery.
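Most of the searches listed above follow a common pattern: construct the search from a conditional independence test or a score over a data set, then call search() to obtain a graph. The sketch below illustrates that pattern for PC and FGES; it is not a definitive recipe. It assumes a Pc(IndependenceTest) constructor, an Fges(Score) constructor, a search() method on each, and that IndTestFisherZ and SemBicScore live in the test and score subpackages, all of which may differ across Tetrad releases.

    import edu.cmu.tetrad.data.DataSet;
    import edu.cmu.tetrad.graph.Graph;
    import edu.cmu.tetrad.search.Fges;
    import edu.cmu.tetrad.search.Pc;
    import edu.cmu.tetrad.search.score.SemBicScore;
    import edu.cmu.tetrad.search.test.IndTestFisherZ;

    public final class SearchSketch {

        // Runs two typical searches over a continuous data set and returns the FGES result.
        // Package locations of IndTestFisherZ and SemBicScore are assumptions; adjust for your release.
        public static Graph run(DataSet data) throws Exception {
            // Constraint-based: PC with a Fisher-Z independence test at alpha = 0.05.
            Pc pc = new Pc(new IndTestFisherZ(data, 0.05));
            Graph pcCpdag = pc.search();
            System.out.println("PC CPDAG: " + pcCpdag);

            // Score-based: FGES with a linear-Gaussian BIC score.
            // The two-argument SemBicScore constructor (precompute covariances) is an assumption.
            Fges fges = new Fges(new SemBicScore(data, true));
            return fges.search();
        }
    }

Other searches in the listing (FCI, BOSS, GRaSP, and the like) generally have the same constructor-plus-search() shape, typically differing only in whether they take a test, a score, or both.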