
Searched +refs:linear +refs:predictors (Results 1 – 25 of 657) sorted by relevance


/dports/math/R-cran-VGAM/VGAM/man/
zero.Rd
7 model certain linear/additive predictors as intercept-only.
17 certain linear/additive predictors to be an intercept only.
56 set \{\code{1:M}\} where \code{M} is the number of linear/additive
57 predictors. Full details about constraint matrices can be found in
69 none of the linear/additive predictors are modelled as
90 Reduced-rank vector generalized linear models.
vglmff-class.Rd
11 In the following, \eqn{M} is the number of linear/additive
12 predictors.
26 Both use the linear/additive predictors.
91 returns the fitted values, given the linear/additive predictors.
107 given the fitted values, returns the linear/additive predictors.
182 with respect to the linear/additive predictors, i.e., the
192 with respect to the linear/additive predictors.
282 linear/additive predictors to be an intercept term only, etc.
weightsvglm.Rd
18 that inherits from a \emph{vector generalized linear model} (VGLM),
61 with respect to the linear predictors. The working weights
70 If one wants to perturb the linear predictors then the
88 log-likelihood with respect to the linear predictors.
101 % Reduced-rank vector generalized linear models.
vglm-class.Rd
5 \description{ Vector generalized linear models. }
12 In the following, \eqn{M} is the number of linear predictors.
23 \item{\code{predictors}:}{Object of class \code{"matrix"}
24 with \eqn{M} columns which holds the \eqn{M} linear predictors. }
183 extract the linear predictors or
184 predict the linear predictors at a new data frame.}
202 Reduced-rank vector generalized linear models.
zapoisson.Rd
68 % at all. Specifies which of the two linear/additive predictors are
70 % By default, both linear/additive predictors are modelled using
94 For one response/species, by default, the two linear/additive
95 predictors for \code{zapoisson()}
103 (i) the order of the linear/additive predictors is switched so the
151 Reduced-rank vector generalized linear models with two linear predictors.
UtilitiesVGAM.Rd
25 Numeric. The total number of linear/additive predictors, called
38 Numeric. The number of linear/additive predictors for one response, called
86 of the linear/additive predictors depending on the number of responses.
predictvglm.Rd
6 Predicted values based on a vector generalized linear model (VGLM)
27 to predict. If omitted, the fitted linear predictors are used.
34 meaning on the scale of the linear predictors.
46 of linear predictors.
51 linear predictor scale.
142 Reduced-rank vector generalized linear models.
acat.Rd
29 By default, the linear/additive predictors used are
40 linear/additive predictors are modelled as intercepts only.
54 \eqn{M} is the number of linear/additive predictors
grc.Rd
41 By default, the first linear/additive predictor
46 All other linear/additive predictors are fitted using an intercept-only,
51 linear/additive predictors.
54 linear predictors for an ordinary (usually univariate) response,
81 Specifies which linear predictor is modelled as the sum of an
135 The number of linear predictors of the \pkg{VGAM} \code{family}
137 Then the number of linear predictors of the \code{rcim()} fit is
160 viz. the number of linear/additive
161 predictors in total.
269 Reduced-rank vector generalized linear models.
[all …]
gev.Rd
38 called \eqn{A} below; and then the linear/additive predictor is
53 % Then the linear/additive predictor is
156 linear/additive predictors are modelled as intercepts only.
159 If \code{zero = NULL} then all linear/additive predictors are modelled as
160 a linear combination of the explanatory variables.
246 Vector generalized linear and additive extreme value models.
285 having \code{M1} linear predictors per (independent) response.
288 6 linear predictors and it is possible to constrain the
289 linear predictors so that the answer is similar to \code{gev()}.
fff.Rd
39 % linear/additive predictors are modelled as intercepts only.
42 % By default all linear/additive predictors are modelled as
43 % a linear combination of the explanatory variables.
constraints.Rd
66 the linear/additive predictors in VGLM/VGAM
88 \eqn{M} is the number of linear/additive predictors,
109 none of the linear/additive predictors are modelled as
138 Reduced-rank vector generalized linear models.
/dports/misc/orange3/orange3-3.29.1/doc/data-mining-library/source/reference/
regression.rst
8 .. index:: linear fitter
9 pair: regression; linear fitter
16 the values of several predictors. The model assumes that the response
17 variable is a linear combination of the predictors, the task of
18 linear regression is therefore to fit the unknown coefficients.
35 .. autoclass:: Orange.regression.linear.LinearRegressionLearner
36 .. autoclass:: Orange.regression.linear.RidgeRegressionLearner
37 .. autoclass:: Orange.regression.linear.LassoRegressionLearner
38 .. autoclass:: Orange.regression.linear.SGDRegressionLearner
39 .. autoclass:: Orange.regression.linear.LinearModel
[all …]
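The regression.rst snippet above describes linear regression as fitting unknown coefficients so that the response is a linear combination of the predictors. A minimal NumPy sketch of that fit via ordinary least squares (independent of Orange's API; the data here are made up for illustration):

```python
import numpy as np

# Toy data: response y is a linear combination of two predictors plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # predictor matrix
true_coef = np.array([2.0, -1.0])
y = X @ true_coef + 0.5 + rng.normal(scale=0.1, size=100)

# Prepend an intercept column and solve the least-squares problem.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
# coef is approximately [0.5, 2.0, -1.0]: intercept, then the two slopes
```

Orange's LinearRegressionLearner wraps this same fitting task behind its learner/model interface.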
/dports/math/R-cran-ipred/ipred/man/
slda.Rd
9 distributed linear scores.
22 predictors.}
34 This function implements the LDA for \eqn{q}-dimensional linear scores of
35 the original \eqn{p} predictors derived from the \eqn{PC_q} rule by Laeuter
40 original \eqn{p} predictors: \eqn{XD_q}. By default, \eqn{q} is the number
41 of eigenvalues greater one. The \eqn{q}-dimensional linear scores are
42 left-spherically distributed and are used as predictors for a classical
/dports/devel/R-cran-Hmisc/Hmisc/man/
transace.Rd
57 vectors). Specify linear transformations by enclosing variables by
89 transformation of the response (obtained by reverse linear
115 approximates the mapping of linear predictors to means over an evenly
252 linear interpolation on the tabulated nonparametric response
287 a list of vectors of settings of the predictors, for predictors for
290 predictors. Example:
309 the predictors used in the fit. For \code{\link{factor}} predictors
313 linear predictors (on the transformed response scale) and fitted
516 # The nomogram will show the linear predictor, median, mean.
530 # This is a table look-up with linear interpolation
[all …]
areg.Rd
30 transformation is linear. Comparing bootstrap or cross-validated mean
32 linear (\code{ytype='l'}) may help the analyst choose the proper model
48 A single predictor or a matrix of predictors. Categorical
49 predictors are required to be coded as integers (as \code{factor}
60 \code{"l"} for no transformation (linear), or \code{"c"} for
69 which will fit 3 parameters to continuous variables (one linear term
82 \item{whichx}{integer or character vector specifying which predictors
162 # Examine overfitting when true transformations are linear
195 # True transformation of x1 is quadratic, y is linear
210 # Overfit 20 predictors when no true relationships exist
[all …]
/dports/devel/R-cran-caret/caret/man/
twoClassSim.Rd
42 \item{noiseVars}{The number of uncorrelated irrelevant predictors to be
45 \item{corrVars}{The number of correlated irrelevant predictors to be
53 \item{factors}{Should the binary predictors be converted to factors?}
74 predictors (\code{J}, \code{K} and \code{L} above).} \item{Linear1,
75 }{Optional uncorrelated standard normal predictors (\code{C} through
80 normal predictors (each with unit variances)}\item{list()}{Optional
85 important predictors and irrelevant predictions.
98 The second set of effects are linear with coefficients that alternate signs
110 systems and use two more predictors (\code{K} and \code{L}):
122 For \code{ordinal = TRUE}, random normal errors are added to the linear
[all …]
/dports/math/R/R-4.1.2/src/library/stats/demo/
00Index
1 glm.vr Some glm() examples from V&R with several predictors
2 lm.glm Some linear and generalized linear modelling
/dports/math/libRmath/R-4.1.1/src/library/stats/demo/
00Index
1 glm.vr Some glm() examples from V&R with several predictors
2 lm.glm Some linear and generalized linear modelling
/dports/math/jags/classic-bugs/vol2/jaw/
ReadMe
1 In the linear and quadratic models, age has been centred to improve
5 beta1 and beta2. However, the linear predictors mu[1] .. mu[4] are
/dports/math/R-cran-car/car/man/
boxTidwell.Rd
10 Computes the Box-Tidwell power transformations of the predictors in a
11 linear model.
29 predictors to be transformed.}
30 \item{other.x}{one-sided formula giving the predictors that are \emph{not}
43 \item{x1}{matrix of predictors to transform.}
44 \item{x2}{matrix of predictors that are \emph{not} candidates for transformation.}
/dports/math/R-cran-VGAM/VGAM/R/
rrvglm.fit.q
103 stop("all linear predictors are linear in the ",
351 " linear loop ", iter, ": ", criterion, "= ")
429 " linear loop ", iter, ": ", criterion, "= ")
498 tfit$predictors <- matrix(tfit$predictors, n, M)
532 dim(tfit$predictors) <- c(n, M)
544 dimnames(residuals) <- list(yn, predictors.names)
547 residuals <- z - tfit$predictors
549 tfit$predictors <- as.vector(tfit$predictors)
554 dimnames(tfit$predictors) <- list(yn, predictors.names)
612 predictors.names = predictors.names,
[all …]
/dports/math/R-cran-robustbase/robustbase/man/
predict.glmrob.Rd
6 predictions from a fitted \emph{robust} generalized linear model (GLM)
18 which to predict. If omitted, the fitted linear predictors are used.}
20 scale of the linear predictors; the alternative \code{"response"}
25 fitted values of each term in the model formula on the linear predictor
/dports/math/R-cran-recipes/recipes/man/
step_impute_linear.Rd
5 \title{Impute numeric variables via a linear model}
39 \item{models}{The \code{\link[=lm]{lm()}} objects are stored here once the linear models
57 create linear regression models to impute missing data.
60 For each variable requiring imputation, a linear model is fit
61 where the outcome is the variable of interest and the predictors are any
69 Since this is a linear regression, the imputation model only uses complete
70 cases for the training set predictors.
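The step_impute_linear.Rd snippet describes imputing a variable by regressing it on the other predictors, training only on complete cases. A hypothetical NumPy sketch of that idea (not the recipes implementation; `impute_linear` is an illustrative name, and it assumes the predictor columns are observed wherever the target is missing):

```python
import numpy as np

def impute_linear(X, col):
    """Fill NaNs in column `col` of X with linear-model predictions.

    The other columns act as predictors; rows containing any NaN are
    excluded from the training set, mirroring the complete-cases rule.
    """
    X = X.astype(float).copy()
    others = [j for j in range(X.shape[1]) if j != col]
    complete = ~np.isnan(X).any(axis=1)                  # training rows
    A = np.column_stack([np.ones(complete.sum()), X[complete][:, others]])
    coef, *_ = np.linalg.lstsq(A, X[complete, col], rcond=None)
    missing = np.isnan(X[:, col])
    B = np.column_stack([np.ones(missing.sum()), X[missing][:, others]])
    X[missing, col] = B @ coef                           # predicted fills
    return X
```

For example, if column 0 equals `2 * column 1 + 1` on the complete rows, a missing entry in column 0 is filled with that same linear prediction.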
/dports/math/py-statsmodels/statsmodels-0.13.1/docs/source/
mixed_glm.rst
7 linear models with random effects in the linear predictors.
19 Unlike statsmodels mixed linear models, the GLIMMIX implementation is
