/dports/biology/ncbi-cxx-toolkit/ncbi_cxx--25_2_0/src/algo/gnomon/ |
gnomon.asn
    19: start Markov-chain-array ,
    20: stop Markov-chain-array ,
    21: donor Markov-chain-array ,
    22: acceptor Markov-chain-array ,
    23: coding-region SEQUENCE OF Markov-chain-params , -- three elements (per phase)
    24: non-coding-region Markov-chain-params } }
    54: Markov-chain-params ::= SEQUENCE {
    58: prev-order Markov-chain-params,
    61: Markov-chain-array ::= SEQUENCE {
    64: matrix SEQUENCE OF Markov-chain-params -- in-exon+in-intron elements
|
/dports/security/john/john-1.9.0-jumbo-1/doc/ |
MARKOV
    20: [Markov:MODE]. The Markov mode name is not case sensitive.
    22: [Markov:Default] is used.
    24: * LEVEL is the "Markov level".
    30: [Markov::MODE] section (the MkvLvl = xxx item)
    34: read from config variables in the [Markov:mode] section, or [Markov:Default]
    127: [Markov:MODE].
    141: or if the Markov level specified on the command line is 0.
    154: or if the Markov level specified on the command line was 0.
    155: (In other words, when the max. Markov level is read from the Markov mode
    156: section, the min. Markov level will be read from the Markov mode section as
    [all …]
|
/dports/math/4ti2/4ti2-Release_1_6_9/src/groebner/ |
Markov.cpp
    39: Markov::Markov(Generation* _gen)  -- in Markov(), function in Markov
    45: Markov::~Markov()  -- in ~Markov()
    50: Markov::compute(  -- in compute()
    83: Markov::compute(  -- in compute()
    119: Markov::algorithm(  -- in algorithm()
    189: Markov::fast_algorithm(  -- in fast_algorithm()
|
Markov.h
    36: class Markov
    39: Markov(Generation* gen = 0);
    40: virtual ~Markov();
|
/dports/science/agrum/aGrUM-29e540d8169268e8fe5d5c69bc4b2b1290f12320/wrappers/pyAgrum/doc/sphinx/ |
markovNetwork.rst
    1: Markov Network
    6: :alt: a Markov network as an unoriented graph and as a factor graph
    8: A Markov network is a undirected probabilistic graphical model. It represents a joint distribution …
    10: A Markov network uses a undirected graph to represent conditional independence in the joint distrib…
    21: * `Tutorial on Markov Network <https://lip6.fr/Pierre-Henri.Wuillemin/aGrUM/docs/current/notebooks/…
|
MNInference.rst
    3: …n from a Markov network and some evidence. aGrUM/pyAgrum mainly focus and the computation of (join…
    4: … task (NP-complete). For now, aGrUM/pyAgrum implements only one exact inference for Markov Network.
|
/dports/math/openturns/openturns-1.18/python/src/ |
DiscreteMarkovChain_doc.i.in
    2: "Discrete Markov chain process.
    7: Probability distribution of the Markov chain origin, i.e. state of the process at :math:`t_0`.
    22: A discrete Markov chain is a process :math:`X: \Omega \times \cD \rightarrow E`, where :math:`\cD=\…
    47: Create a Markov chain:
    75: The probability distribution of the origin of the Markov chain."
    93: The probability distribution of the origin of the Markov chain."
    97: "Compute the stationary distribution of the Markov chain.
    102: The stationary probability distribution of the Markov chain:
    108: Compute the stationary distribution of a Markov chain:
|
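The OpenTURNS `DiscreteMarkovChain` snippets above mention computing the stationary distribution of a discrete Markov chain. As a minimal standalone sketch of that computation (plain NumPy, not the OpenTURNS API): the stationary distribution is the normalized leading left eigenvector of the transition matrix, i.e. the `pi` with `pi @ P = pi`. The matrix `P` here is an illustrative example, not taken from any of the listed files.

```python
import numpy as np

# Transition matrix of a small two-state discrete-time Markov chain
# (each row sums to 1); illustrative values only.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def stationary(P):
    """Stationary distribution pi satisfying pi @ P = pi, computed as
    the left eigenvector of P for the eigenvalue closest to 1."""
    vals, vecs = np.linalg.eig(P.T)          # left eigenvectors of P
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()                     # normalize to a distribution

pi = stationary(P)
# For this P, balance gives pi = [0.8, 0.2]:
# 0.1 * pi[0] = 0.4 * pi[1]  =>  pi[0] = 4 * pi[1]
```

The eigenvector route works for any irreducible finite chain; for very large sparse chains, iterating `pi = pi @ P` to a fixed point (power iteration) is the usual alternative.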
/dports/math/octave-forge-queueing/queueing/ |
DESCRIPTION
    6: Title: Octave package for Queueing Networks and Markov chains analysis
    8: networks and Markov chains analysis. This package can be used to
    13: performance measures for Markov chains can be computed, such as state
    15: sojourn times and so forth. Discrete- and continuous-time Markov
|
INDEX
    1: queueing >> Queueing Networks and Markov chains
    2: Discrete-time Markov chains
    11: Continuous-time Markov chains
|
/dports/math/octave-forge-queueing/queueing/doc/ |
markovchains.texi
    24: @node Markov Chains
    25: @chapter Markov Chains
    28: * Discrete-Time Markov Chains::
    29: * Continuous-Time Markov Chains::
    32: @node Discrete-Time Markov Chains
    33: @section Discrete-Time Markov Chains
    85: @cindex Markov chain, discrete time
    87: @cindex discrete time Markov chain
    107: @cindex discrete time Markov chain
    109: @cindex irreducible Markov chain
    [all …]
|
/dports/math/R-cran-mcmc/mcmc/man/ |
initseq.Rd
    5: Variance of sample mean of functional of reversible Markov chain
    13: Markov chain.}
    25: assuming the input time series is a scalar-valued functional of a reversible Markov
    31: and convex. It also estimates the variance in the Markov chain central
    36: scalar functionals of a reversible Markov chain. Thus these initial sequence
    60: the asymptotic variance in the Markov chain CLT. Divide by \code{length(x)}
    63: the asymptotic variance in the Markov chain CLT. Divide by \code{length(x)}
    66: the asymptotic variance in the Markov chain CLT. Divide by \code{length(x)}
    76: Practical Markov Chain Monte Carlo.
|
morph.metrop.Rd
    7: Markov chain Monte Carlo for continuous random vector using a
    28: \code{morph.metrop} implements morphometric methods for Markov
    33: run for the induced density. The Markov chain is transformed back to
    35: density, instead of the original density, can result in a Markov chain
    46: of \eqn{f^{-1}}. Because \eqn{f} is a diffeomorphism, a Markov chain
    47: for \eqn{f_Y}{fY} may be transformed into a Markov chain for
    48: \eqn{f_X}{fX}. Furthermore, these Markov chains are isomorphic
    55: fY} and transforms the resulting Markov chain into a Markov chain for
    115: \item{morph.final}{the final state of the Markov chain on the
|
metrop.Rd
    7: Markov chain Monte Carlo for continuous random vector using a Metropolis
    23: density of the desired equilibrium distribution of the Markov chain.
    24: Its first argument is the state vector of the Markov chain. Other
    34: of the Markov chain is the final state from the run recorded in
    37: \item{initial}{a real vector, the initial state of the Markov chain.
    58: producing a Markov chain with equilibrium distribution having a specified
    89: \item{final}{final state of Markov chain.}
    92: \item{time}{running time of Markov chain from \code{system.time()}.}
    142: \emph{Handbook of Markov Chain Monte Carlo} (Geyer, 2011).
    159: of the target distribution, a valid state of the Markov chain,
    [all …]
|
/dports/math/openturns/openturns-1.18/python/doc/theory/data_analysis/ |
metropolis_hastings.rst
    6: | **Markov chain.** Considering a :math:`\sigma`-algebra :math:`\cA` on
    7: :math:`\Omega`, a Markov chain is a process
    33: | :math:`{(X_k)}_{k\in\Nset}` is a homogeneous Markov Chain of
    37: Markov Chain of transition :math:`K` on :math:`(\Omega, \cA)` with
    40: - :math:`K_\nu` denotes the probability distribution of the Markov
    50: | **Total variation convergence.** A Markov Chain of distribution
    74: Markov Chain Monte-Carlo techniques allows to sample and integrate
    83: Metropolis-Hastings algorithm produces a Markov chain
    87: - the transition kernel of the Markov chain is :math:`t`-invariant;
    91: - the Markov chain satisfies the *ergodic theorem*: let :math:`\phi` be
    [all …]
|
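Several of the hits above (the `mcmc` package's `metrop.Rd` and the OpenTURNS `metropolis_hastings.rst` theory page) describe the Metropolis–Hastings construction: a proposal plus an accept/reject step yields a Markov chain whose equilibrium distribution is the target. As a minimal standalone sketch of that idea (pure-Python random-walk Metropolis with a symmetric Gaussian proposal, not the API of either package; target and step size are illustrative):

```python
import math
import random

def metropolis_hastings(log_density, x0, n_steps, step=1.0):
    """Random-walk Metropolis sampler for a 1-D target.

    log_density : log of the (possibly unnormalized) target density.
    Returns the list of visited states, a Markov chain whose
    equilibrium distribution is the target density.
    """
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        y = x + random.gauss(0.0, step)            # symmetric proposal
        log_alpha = log_density(y) - log_density(x)
        if math.log(random.random()) < log_alpha:  # accept with prob min(1, alpha)
            x = y
        chain.append(x)                            # rejected moves repeat x
    return chain

# Illustrative target: standard normal, log density up to a constant.
random.seed(42)
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 5000)
```

Because the proposal is symmetric, the Hastings correction cancels and the acceptance ratio reduces to the density ratio; averaging a function of `chain` (after discarding burn-in) estimates its expectation under the target, per the ergodic theorem mentioned in the snippet.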
/dports/math/R-cran-coda/coda/man/ |
raftery.diag.Rd
    21: intended for use on a short pilot run of a Markov chain. The number
    57: qth quantile of U. The process \eqn{Z_t} is derived from the Markov
    59: a Markov chain. However, \eqn{Z_t} may behave as a Markov chain if
    62: chain \eqn{Z^k_t} behave as a Markov chain. The required sample size is
    79: Implementation strategies for Markov chain Monte Carlo.
    84: Practical Markov Chain Monte Carlo (W.R. Gilks, D.J. Spiegelhalter
|
/dports/graphics/py-giddy/giddy-2.3.3/giddy/tests/ |
test_mobility.py
    5: from ..markov import Markov
    16: m = Markov(q5)
    28: m = Markov(q5)
|
/dports/science/agrum/aGrUM-29e540d8169268e8fe5d5c69bc4b2b1290f12320/src/docs/modules/ |
bn.dox
    40: * \defgroup bn_group Markov Network
    42: * \defgroup mn_inference Inference Algorithms for Markov Networks
    43: * \defgroup mn_io Serialization of Markov Networks
|
/dports/math/R-cran-MSwM/MSwM/man/ |
MSM-package.Rd
    9: …Univariate Autoregressive Markov Switching Models for Linear and Generalized Models by using the E…
    30: Goldfeld, S., Quantd, R. (2005). 'A Markov model for switching Regression', Journal of Econometrics…
    31: Perlin, M. (2007). 'Estimation, Simulation and Forecasting of a Markov Switching Regression', (Gene…
|
/dports/math/jacop/jacop-4.8.0/src/main/java/org/jacop/examples/floats/ |
Markov.java
    50: public class Markov {
    124: Markov example = new Markov();  -- in main()
|
/dports/math/octave-forge-queueing/queueing/inst/ |
dtmc.m
    23: ## @cindex Markov chain, discrete time
    24: ## @cindex discrete time Markov chain
    26: ## @cindex Markov chain, stationary probabilities
    27: ## @cindex Markov chain, transient probabilities
    29: ## Compute stationary or transient state occupancy probabilities for a discrete-time Markov chain.
    33: ## discrete-time Markov chain with finite state space @math{@{1, @dots{},
|
/dports/archivers/lzlib/lzlib-1.12/ |
AUTHORS
    4: Abraham Lempel and Jacob Ziv (for the LZ algorithm), Andrey Markov (for the
    5: definition of Markov chains), G.N.N. Martin (for the definition of range
|
/dports/archivers/lzip/lzip-1.22/ |
AUTHORS
    4: Abraham Lempel and Jacob Ziv (for the LZ algorithm), Andrey Markov (for the
    5: definition of Markov chains), G.N.N. Martin (for the definition of range
|
/dports/science/dakota/dakota-6.13.0-release-public.src-UI/docs/KeywordMetadata/ |
method-bayes_calibration-chain_diagnostics
    2: Compute diagnostic metrics for Markov chain
    5: While a Markov chain produced via Monte Carlo sampling eventually converges
|
/dports/graphics/py-giddy/giddy-2.3.3/ |
README.md
    35: - Spatially explicit Markov methods:
    36: - Spatial Markov and inference
    37: - LISA Markov and inference
    53: * [Markov based methods](notebooks/MarkovBasedMethods.ipynb)
    54: * [Rank Markov methods](notebooks/RankMarkov.ipynb)
|
/dports/math/R-cran-MCMCpack/MCMCpack/inst/ |
CITATION
    4: title = "{MCMCpack}: Markov Chain Monte Carlo in {R}",
    16: "MCMCpack: Markov Chain Monte Carlo in R.",
|