% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/nloptr.R
\name{nloptr}
\alias{nloptr}
\title{R interface to NLopt}
\usage{
nloptr(x0, eval_f, eval_grad_f = NULL, lb = NULL, ub = NULL,
  eval_g_ineq = NULL, eval_jac_g_ineq = NULL, eval_g_eq = NULL,
  eval_jac_g_eq = NULL, opts = list(), ...)
}
\arguments{
\item{x0}{vector with starting values for the optimization.}

\item{eval_f}{function that returns the value of the objective function. It
can also return gradient information at the same time in a list with
elements "objective" and "gradient" (see below for an example).}

\item{eval_grad_f}{function that returns the value of the gradient of the
objective function. Not all of the algorithms require a gradient.}

\item{lb}{vector with lower bounds of the controls (use -Inf for controls
without lower bound); by default there are no lower bounds for any of the
controls.}

\item{ub}{vector with upper bounds of the controls (use Inf for controls
without upper bound); by default there are no upper bounds for any of the
controls.}
\item{eval_g_ineq}{function to evaluate (non-)linear inequality constraints
that should hold in the solution. It can also return Jacobian information
at the same time in a list with elements "constraints" and "jacobian" (see
below for an example).}

\item{eval_jac_g_ineq}{function to evaluate the Jacobian of the (non-)linear
inequality constraints that should hold in the solution.}

\item{eval_g_eq}{function to evaluate (non-)linear equality constraints that
should hold in the solution. It can also return Jacobian information at the
same time in a list with elements "constraints" and "jacobian" (see below
for an example).}

\item{eval_jac_g_eq}{function to evaluate the Jacobian of the (non-)linear
equality constraints that should hold in the solution.}
\item{opts}{list with options. The option "algorithm" is required. Check the
\href{http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms}{NLopt website}
for a full list of available algorithms. Other options control the
termination conditions (minf_max, ftol_rel, ftol_abs, xtol_rel, xtol_abs,
maxeval, maxtime). Default is xtol_rel = 1e-4. More information on the
termination conditions is available
\href{http://ab-initio.mit.edu/wiki/index.php/NLopt_Introduction\#Termination_conditions}{here}.
A full description of all options is shown by the function
\code{nloptr.print.options()}.

Some algorithms with equality constraints require the option local_opts,
which contains a list with an algorithm and a termination condition for the
local algorithm. See \code{?`nloptr-package`} for an example, and the
equality-constrained example in the Examples section below.

The option print_level controls how much output is shown during the
optimization process. Possible values: \tabular{ll}{ 0 (default) \tab no
output \cr 1 \tab show iteration number and value of objective function \cr
2 \tab 1 + show value of (in)equalities \cr 3 \tab 2 + show value of
controls } The final example below illustrates the use of print_level.

The option check_derivatives (default = FALSE) can be used to compare the
user-supplied analytic gradients with finite difference approximations. The
option check_derivatives_print ('all' (default), 'errors', 'none') controls
the output of the derivative checker, if it is run, showing all comparisons,
only those that resulted in an error, or none. The option
check_derivatives_tol (default = 1e-04) determines when a difference
between an analytic gradient and its finite difference approximation is
flagged as an error.}

\item{...}{arguments that will be passed to the user-defined objective and
constraint functions.}
}
\value{
The return value is a list containing the inputs and the following
additional elements:
  \item{call}{the call that was made to \code{nloptr}}
  \item{status}{integer value with the status of the optimization (0 is
    success)}
  \item{message}{more informative message with the status of the
    optimization}
  \item{iterations}{number of iterations that were executed}
  \item{objective}{value of the objective function at the solution}
  \item{solution}{optimal values of the controls}
  \item{version}{version of NLopt that was used}
}
\description{
nloptr is an R interface to NLopt, a free/open-source library for nonlinear
optimization started by Steven G. Johnson, providing a common interface for
a number of different free optimization routines available online as well as
original implementations of various other algorithms. The NLopt library is
available under the GNU Lesser General Public License (LGPL), and the
copyrights are owned by a variety of authors. Most of the information here
has been taken from \href{http://ab-initio.mit.edu/nlopt}{the NLopt website},
where more details are available.
}
\details{
NLopt addresses general nonlinear optimization problems of the form

\deqn{\min_{x \in R^n} f(x)}{min f(x), x in R^n,}

subject to

\deqn{g(x) \leq 0, \quad h(x) = 0, \quad lb \leq x \leq ub,}{g(x) <= 0,  h(x) = 0,  lb <= x <= ub,}

where f is the objective function to be minimized and x represents the n
optimization parameters. This problem may optionally be subject to the bound
constraints (also called box constraints) lb and ub. For partially or
totally unconstrained problems the bounds can take -Inf or Inf. One may also
optionally have m nonlinear inequality constraints (sometimes called a
nonlinear programming problem), which can be specified in g(x), and equality
constraints that can be specified in h(x). Note that not all of the
algorithms in NLopt can handle constraints.
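
As a sketch, this problem maps onto the arguments of \code{nloptr} as
follows (the names \code{f}, \code{g}, and \code{h} are placeholders for
user-defined functions, and the algorithm is chosen only for illustration):

\preformatted{
nloptr(x0          = x0,   # vector of starting values
       eval_f      = f,    # objective function f(x)
       lb          = lb,   # lower bounds on the controls
       ub          = ub,   # upper bounds on the controls
       eval_g_ineq = g,    # inequality constraints g(x) <= 0
       eval_g_eq   = h,    # equality constraints h(x) = 0
       opts        = list("algorithm" = "NLOPT_GN_ISRES",
                          "xtol_rel"  = 1e-4))
}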
}
\note{
See \code{?`nloptr-package`} for an extended example.
}
\examples{

library('nloptr')

## Rosenbrock Banana function and gradient in separate functions
eval_f <- function(x) {
    return( 100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2 )
}

eval_grad_f <- function(x) {
    return( c( -400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
                200 * (x[2] - x[1] * x[1])) )
}


# initial values
x0 <- c( -1.2, 1 )

opts <- list("algorithm"="NLOPT_LD_LBFGS",
             "xtol_rel"=1.0e-8)

# solve Rosenbrock Banana function
res <- nloptr( x0=x0,
               eval_f=eval_f,
               eval_grad_f=eval_grad_f,
               opts=opts)
print( res )
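
# Elements of the result can also be accessed directly; see the Value
# section of this help page for the full list. For example:
print( res$solution )   # optimal values of the controls
print( res$objective )  # objective value at the solution
print( res$status )     # status code of the optimization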


## Rosenbrock Banana function and gradient in one function
# this can be used to economize on calculations
eval_f_list <- function(x) {
    return( list( "objective" = 100 * (x[2] - x[1] * x[1])^2 + (1 - x[1])^2,
                  "gradient"  = c( -400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
                                    200 * (x[2] - x[1] * x[1])) ) )
}

# solve Rosenbrock Banana function using an objective function that
# returns a list with the objective value and its gradient
res <- nloptr( x0=x0,
               eval_f=eval_f_list,
               opts=opts)
print( res )


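## Example with an equality constraint, illustrating the local_opts option.
## This problem is not from the NLopt tutorial; it was chosen only to keep
## the illustration small: minimize x1^2 + x2^2 subject to x1 + x2 = 1,
## with solution (0.5, 0.5).
eval_f_eq      <- function( x ) { return( sum( x^2 ) ) }
eval_grad_f_eq <- function( x ) { return( 2 * x ) }
eval_g_eq      <- function( x ) { return( x[1] + x[2] - 1 ) }
eval_jac_g_eq  <- function( x ) { return( c( 1, 1 ) ) }

# the augmented Lagrangian algorithm needs a local optimizer,
# supplied through the option local_opts
local_opts <- list( "algorithm" = "NLOPT_LD_LBFGS",
                    "xtol_rel"  = 1.0e-7 )
opts_eq <- list( "algorithm"  = "NLOPT_LD_AUGLAG",
                 "xtol_rel"   = 1.0e-7,
                 "maxeval"    = 1000,
                 "local_opts" = local_opts )

res_eq <- nloptr( x0            = c( 1, 5 ),
                  eval_f        = eval_f_eq,
                  eval_grad_f   = eval_grad_f_eq,
                  eval_g_eq     = eval_g_eq,
                  eval_jac_g_eq = eval_jac_g_eq,
                  opts          = opts_eq )
print( res_eq )
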

# Example showing how to solve the problem from the NLopt tutorial.
#
# min sqrt( x2 )
# s.t. x2 >= 0
#      x2 >= ( a1*x1 + b1 )^3
#      x2 >= ( a2*x1 + b2 )^3
# where
# a1 = 2, b1 = 0, a2 = -1, b2 = 1
#
# re-formulate constraints to be of form g(x) <= 0
#      ( a1*x1 + b1 )^3 - x2 <= 0
#      ( a2*x1 + b2 )^3 - x2 <= 0


# objective function
# (a and b are unused here, but each function must accept the extra
# arguments that are passed through '...' in the call to nloptr)
eval_f0 <- function( x, a, b ){
    return( sqrt(x[2]) )
}

# constraint function
eval_g0 <- function( x, a, b ) {
    return( (a*x[1] + b)^3 - x[2] )
}

# gradient of objective function
eval_grad_f0 <- function( x, a, b ){
    return( c( 0, .5/sqrt(x[2]) ) )
}

# jacobian of constraint
eval_jac_g0 <- function( x, a, b ) {
    return( rbind( c( 3*a[1]*(a[1]*x[1] + b[1])^2, -1.0 ),
                   c( 3*a[2]*(a[2]*x[1] + b[2])^2, -1.0 ) ) )
}


# functions with gradients in objective and constraint function
# this can be useful if the same calculations are needed for
# the function value and the gradient
eval_f1 <- function( x, a, b ){
    return( list("objective"=sqrt(x[2]),
                 "gradient"=c(0,.5/sqrt(x[2])) ) )
}

eval_g1 <- function( x, a, b ) {
    return( list( "constraints"=(a*x[1] + b)^3 - x[2],
                  "jacobian"=rbind( c( 3*a[1]*(a[1]*x[1] + b[1])^2, -1.0 ),
                                    c( 3*a[2]*(a[2]*x[1] + b[2])^2, -1.0 ) ) ) )
}


# define parameters
a <- c(2,-1)
b <- c(0, 1)

# Solve using NLOPT_LD_MMA with gradient information supplied in a separate function
res0 <- nloptr( x0=c(1.234,5.678),
                eval_f=eval_f0,
                eval_grad_f=eval_grad_f0,
                lb = c(-Inf,0),
                ub = c(Inf,Inf),
                eval_g_ineq = eval_g0,
                eval_jac_g_ineq = eval_jac_g0,
                opts = list("algorithm"="NLOPT_LD_MMA"),
                a = a,
                b = b )
print( res0 )
# Solve using NLOPT_LN_COBYLA without gradient information
res1 <- nloptr( x0=c(1.234,5.678),
                eval_f=eval_f0,
                lb = c(-Inf,0),
                ub = c(Inf,Inf),
                eval_g_ineq = eval_g0,
                opts = list("algorithm"="NLOPT_LN_COBYLA"),
                a = a,
                b = b )
print( res1 )


# Solve using NLOPT_LD_MMA with gradient information in objective function
res2 <- nloptr( x0=c(1.234,5.678),
                eval_f=eval_f1,
                lb = c(-Inf,0),
                ub = c(Inf,Inf),
                eval_g_ineq = eval_g1,
                opts = list("algorithm"="NLOPT_LD_MMA", "check_derivatives"=TRUE),
                a = a,
                b = b )
print( res2 )
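
# The print_level option controls how much diagnostic output is shown
# during the optimization; level 3 prints the iteration number, the value
# of the objective function, and the values of the controls at each
# iteration. This re-runs the problem above purely to illustrate the option.
res3 <- nloptr( x0=c(1.234,5.678),
                eval_f=eval_f1,
                lb = c(-Inf,0),
                ub = c(Inf,Inf),
                eval_g_ineq = eval_g1,
                opts = list("algorithm"="NLOPT_LD_MMA", "print_level"=3),
                a = a,
                b = b )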

}
\references{
Steven G. Johnson, The NLopt nonlinear-optimization package,
\url{http://ab-initio.mit.edu/nlopt}
}
\seealso{
\code{\link[nloptr:nloptr.print.options]{nloptr.print.options}}
  \code{\link[nloptr:check.derivatives]{check.derivatives}}
  \code{\link{optim}}
  \code{\link{nlm}}
  \code{\link{nlminb}}
  \code{Rsolnp::solnp}
}
\author{
Steven G. Johnson and others (C code) \cr Jelmer Ypma (R interface)
}
\keyword{interface}
\keyword{optimize}