\chapter{NUMERIC: Solving numerical problems}
\label{NUMERIC}
\typeout{{NUMERIC: Solving numerical problems}}

{\footnotesize
\begin{center}
Herbert Melenk \\
Konrad--Zuse--Zentrum f\"ur Informationstechnik Berlin \\
Takustra{\ss}e 7 \\
D--14195 Berlin--Dahlem, Germany \\[0.05in]
e--mail: melenk@zib.de
\end{center}
}
\ttindex{NUMERIC}
\ttindex{NUM\_SOLVE}\index{Newton's method}\ttindex{NUM\_ODESOLVE}
\ttindex{BOUNDS}\index{Chebyshev fit}
\ttindex{NUM\_MIN}\index{Minimum}\ttindex{NUM\_INT}\index{Quadrature}

The {\small NUMERIC} package implements some numerical (approximate)
algorithms for \REDUCE, based on the \REDUCE\ rounded mode
arithmetic. These algorithms are implemented for standard cases. They
should not be called for ill-conditioned problems; please use standard
mathematical libraries for those.

\section{Syntax}

\subsection{Intervals, Starting Points}

Intervals are generally coded as a lower bound and an
upper bound connected by the operator \verb+`..'+, usually
associated with a variable in an
equation.\index{Interval}

\begin{verbatim}
   x= (2.5 .. 3.5)
\end{verbatim}

means that the variable x is taken in the range from 2.5 up to
3.5. Note that the bounds can be algebraic
expressions, which, however, must evaluate to numeric results.
In cases where an interval is returned as the result, the lower
and upper bounds can be extracted by the \verb+PART+ operator
as the first and second part respectively.
A starting point is specified by an equation with a numeric
right-hand side,

\begin{verbatim}
   x=3.0
\end{verbatim}

If for multivariate applications several coordinates must be
specified by intervals or as a starting point, these
specifications can be collected in one parameter (which is then
a list) or they can alternatively be given as separate parameters.
The list form is more appropriate when the
parameters are built from other \REDUCE\ calculations in an
automatic style, while the flat form is more convenient
for direct interactive input.

\subsection{Accuracy Control}

The keyword parameters $accuracy=a$ and $iterations=i$, where
$a$ and $i$ must be positive integers, control the
iterative algorithms: the iteration is continued until
the local error is below $10^{-a}$; if that is impossible
within $i$ steps, the iteration is terminated with an
error message. The values reached so far are then returned
as the result.

\section{Minima}

The function to be minimised must have continuous partial derivatives
with respect to all variables. The starting point of the search can
be specified; if not, random values are taken instead. The steepest
descent algorithms in general find only local minima.

Syntax:\ttindex{NUM\_MIN}

\begin{description}
\item[NUM\_MIN] $(exp, var_1[=val_1] [,var_2[=val_2] \ldots]$

$[,accuracy=a][,iterations=i])$

or

\item[NUM\_MIN] $(exp, \{ var_1[=val_1] [,var_2[=val_2] \ldots] \}$

$[,accuracy=a][,iterations=i])$

where $exp$ is a function expression,

$var_1, var_2, \ldots$ are the variables in $exp$ and
$val_1, val_2, \ldots$ are the (optional) start values.

NUM\_MIN tries to find the next local minimum along the descending
path starting at the given point. The result is a list
with the minimum function value as its first element, followed by a list
of equations where the variables are equated to the coordinates
of the result point.
\end{description}

Examples:

\begin{verbatim}
 num_min(sin(x)+x/5, x);

 {4.9489585606,{X=29.643767785}}

 num_min(sin(x)+x/5, x=0);

 { - 1.3342267466,{X= - 1.7721582671}}

 % Rosenbrock function (well known as hard to minimize).
 fktn := 100*(x1**2-x2)**2 + (1-x1)**2;
 num_min(fktn, x1=-1.2, x2=1, iterations=200);

 {0.00000021870228295,{X1=0.99953284494,X2=0.99906807238}}

\end{verbatim}

\section{Roots of Functions / Solutions of Equations}

An adaptively damped Newton iteration is used to find an approximate
zero of a function, a function vector, or the solution of an equation
or an equation system. The expressions must have continuous
derivatives with respect to all variables. A starting point for the
iteration can be given; if not, random values are taken instead. If
the number of forms is not equal to the number of variables, the
Newton method cannot be applied; the minimum of the sum of absolute
squares is then located instead.

With {\tt ON COMPLEX}, solutions with imaginary parts can be
found if either the expression(s) or the starting point
contain a nonzero imaginary part.

Syntax:\ttindex{NUM\_SOLVE}

\begin{description}
\item[NUM\_SOLVE] $(exp_1, var_1[=val_1][,accuracy=a][,iterations=i])$

or

\item[NUM\_SOLVE] $(\{exp_1,\ldots,exp_n\},
   var_1[=val_1],\ldots,var_n[=val_n]$
\item[\ \ \ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$

or

\item[NUM\_SOLVE] $(\{exp_1,\ldots,exp_n\},
   \{var_1[=val_1],\ldots,var_n[=val_n]\}$
\item[\ \ \ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$

where $exp_1,\ldots,exp_n$ are function expressions,

$var_1,\ldots,var_n$ are the variables,

$val_1,\ldots,val_n$ are optional start values.

NUM\_SOLVE tries to find a zero/solution of the expression(s).
The result is a list of equations, where the variables are
equated to the coordinates of the result point.
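The damped Newton idea behind NUM\_SOLVE can be illustrated in one dimension. The following Python sketch is an illustration only, not the package's actual implementation: it halves the Newton step until the residual decreases, which is one simple form of adaptive damping.

```python
import math

def damped_newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton iteration with simple step damping (illustrative sketch)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)          # full Newton step
        t = 1.0
        # damping: halve the step until the residual shrinks
        while abs(f(x - t * step)) >= abs(fx) and t > 1e-10:
            t /= 2.0
        x -= t * step
    return x

# Find the root of cos(x) - x = 0 (approximately 0.739085),
# starting far from the root so that damping actually kicks in.
root = damped_newton(lambda x: math.cos(x) - x,
                     lambda x: -math.sin(x) - 1.0,
                     x=3.0)
```

Close to the root the damping factor stays at 1 and the iteration converges quadratically, which is why the `accuracy` goal is usually reached in few steps.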
The Jacobian matrix is stored as a side effect in the shared
variable JACOBIAN.\ttindex{JACOBIAN}

\end{description}

Example:

\begin{verbatim}
 num_solve({sin x=cos y, x + y = 1},{x=1,y=2});

 {X= - 1.8561957251,Y=2.856195584}

 jacobian;

 [COS(X)  SIN(Y)]
 [              ]
 [   1       1  ]
\end{verbatim}

\section{Integrals}

Numerical integration uses a polyalgorithm, explained in the full
documentation.\ttindex{NUM\_INT}

\begin{description}
\item[NUM\_INT] $(exp,var_1=(l_1 .. u_1)[,var_2=(l_2 .. u_2)\ldots]$
\item[\ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$

where $exp$ is the function to be integrated,

$var_1, var_2, \ldots$ are the integration variables,

$l_1, l_2, \ldots$ are the lower bounds,

$u_1, u_2, \ldots$ are the upper bounds.

The result is the value of the integral.

\end{description}

Example:

\begin{verbatim}
 num_int(sin x,x=(0 .. pi));

 2.0000010334
\end{verbatim}

\section{Ordinary Differential Equations}

A Runge-Kutta method of order 3 finds an approximate graph for the
solution of a real initial value problem for an ordinary differential
equation.

Syntax:\ttindex{NUM\_ODESOLVE}
\begin{description}
\item[NUM\_ODESOLVE] ($exp$,$depvar=dv$,$indepvar=(from .. to)$

$[,accuracy=a][,iterations=i])$

where

$exp$ is the differential equation (or an expression implicitly set
to zero) which contains the first derivative of $depvar$ with respect
to $indepvar$,

$depvar$ is an identifier representing the dependent variable (the
function to be found),

$indepvar$ is an identifier representing the independent variable,

$from$ is the starting point of the integration,

$to$ is the endpoint of the integration (allowed to be below $from$),

$dv$ is the initial value of $depvar$ at the point $indepvar=from$.
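As an illustration of the underlying method, one third-order Runge-Kutta scheme (Kutta's classical RK3; the coefficients actually used by NUM\_ODESOLVE are not specified here, so this is an assumption) can be sketched in Python:

```python
import math

def rk3(f, x0, y0, x1, n):
    """Kutta's third-order Runge-Kutta method with n fixed steps for
    the initial value problem y' = f(x, y), y(x0) = y0.
    Returns a list of (x, y) points, like NUM_ODESOLVE's result."""
    h = (x1 - x0) / n
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h, y - h * k1 + 2 * h * k2)
        y += h * (k1 + 4 * k2 + k3) / 6
        x += h
        points.append((x, y))
    return points

# y' = y, y(0) = 1 on [0, 1] with 5 steps; the exact solution is e^x
pts = rk3(lambda x, y: y, 0.0, 1.0, 1.0, 5)
```

Even with only five fixed steps the endpoint value is within about $10^{-3}$ of $e \approx 2.71828$; NUM\_ODESOLVE additionally refines the step size near the starting point when the accuracy goal demands it.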
The ODE $exp$ is converted into an explicit form, which is then
used for a Runge-Kutta iteration over the given range. The
number of steps is controlled by the value of $i$
(default: 20).
If the steps are too coarse to reach the desired
accuracy in the neighbourhood of the starting point, the number is
increased automatically.

The result is a list of pairs, each representing a point of the
approximate solution of the ODE problem.
\end{description}

Example:

\begin{verbatim}
 num_odesolve(df(y,x)=y,y=1,x=(0 .. 1), iterations=5);

 {{0.0,1.0},{0.2,1.2214},{0.4,1.49181796},{0.6,1.8221064563},

  {0.8,2.2255208258},{1.0,2.7182511366}}
\end{verbatim}

\section{Bounds of a Function}

Upper and lower bounds of a real-valued function over an
interval or a rectangular multivariate domain are computed
by the operator {\tt BOUNDS}. Some knowledge
about the behaviour of special functions like ABS, SIN, COS, EXP, LOG,
fractional exponentials etc.\ is integrated and is used
when the operator BOUNDS is called with rounded mode on
(otherwise only algebraic evaluation rules are available).

If BOUNDS finds a singularity within an interval, the evaluation
is stopped with an error message indicating the problematic part
of the expression.
\newpage
Syntax:\ttindex{BOUNDS}

\begin{description}
\item[BOUNDS] $(exp,var_1=(l_1 .. u_1) [,var_2=(l_2 .. u_2) \ldots])$

\item[BOUNDS] $(exp,\{var_1=(l_1 .. u_1) [,var_2=(l_2 .. u_2) \ldots]\})$

where $exp$ is the function to be investigated,

$var_1, var_2, \ldots$ are the variables of $exp$,

$l_1, l_2, \ldots$ and $u_1, u_2, \ldots$ specify the area (intervals).

{\tt BOUNDS} computes upper and lower bounds for the expression in the
given area. An interval is returned.

\end{description}

Example:

\begin{verbatim}
 bounds(sin x,x=(1 ..
            2));

 {-1,1}

 on rounded;
 bounds(sin x,x=(1 .. 2));

 0.84147098481 .. 1

 bounds(x**2+x,x=(-0.5 .. 0.5));

 - 0.25 .. 0.75

\end{verbatim}

\section{Chebyshev Curve Fitting}

The operator family {\tt Chebyshev\_}\ldots\ implements approximation
and evaluation of functions by the Chebyshev method.

The operator {\tt Chebyshev\_fit}\ttindex{Chebyshev\_fit} computes
this approximation and returns a list which has as its first element
the truncated Chebyshev sum expressed as a polynomial and as its
second element the sequence of Chebyshev coefficients $c_i$. {\tt
Chebyshev\_df}\ttindex{Chebyshev\_df} and {\tt
Chebyshev\_int}\ttindex{Chebyshev\_int} transform a Chebyshev
coefficient list into the coefficients of the corresponding derivative
or integral respectively. For evaluating a Chebyshev approximation at
a given point in the basic interval the operator {\tt
Chebyshev\_eval}\ttindex{Chebyshev\_eval} can be used. Note that {\tt
Chebyshev\_eval} is based on a recurrence relation which is in general
more stable than a direct evaluation of the complete polynomial.

\begin{description}
\item[CHEBYSHEV\_FIT] $(fcn,var=(lo .. hi),n)$

\item[CHEBYSHEV\_EVAL] $(coeffs,var=(lo .. hi),var=pt)$

\item[CHEBYSHEV\_DF] $(coeffs,var=(lo .. hi))$

\item[CHEBYSHEV\_INT] $(coeffs,var=(lo .. hi))$

where $fcn$ is an algebraic expression (the function to be
fitted), $var$ is the variable of $fcn$, $lo$ and $hi$ are
numerical real values which describe an interval ($lo < hi$),
$n$ is the approximation order, an integer $>0$ (set to 20 if
missing), $pt$ is a numerical value in the interval, and $coeffs$ is
a series of Chebyshev coefficients, computed by one of
{\tt CHEBYSHEV\_FIT}, {\tt CHEBYSHEV\_DF} or {\tt CHEBYSHEV\_INT}.
\end{description}

Example:

\begin{verbatim}

on rounded;

w:=chebyshev_fit(sin x/x,x=(1 ..
            3),5);

               3           2
w := {0.03824*x  - 0.2398*x  + 0.06514*x + 0.9778,

      {0.8991,-0.4066,-0.005198,0.009464,-0.00009511}}

chebyshev_eval(second w, x=(1 .. 3), x=2.1);

0.4111

\end{verbatim}

\section{General Curve Fitting}

The operator {\tt NUM\_FIT}\ttindex{NUM\_FIT} finds, for a set of
points, the linear combination of a given set of functions (a
function basis) which approximates the points best in the
least-squares sense (minimum of the sum of the squares of the
deviations). The solution is found as a zero of the gradient vector
of the sum of squared errors.

Syntax:

\begin{description}
\item[NUM\_FIT] $(vals,basis,var=pts)$

where $vals$ is a list of numeric values,

$var$ is a variable used for the approximation,

$pts$ is a list of coordinate values which correspond to $var$,

$basis$ is a set of functions varying in $var$ which is used
for the approximation.

\end{description}

The result is a list containing as its first element the
function which approximates the given values, and as its
second element a list of coefficients which were used
to build this function from the basis.
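The normal-equations approach behind such a least-squares fit can be sketched in Python. This is an illustration only, not NUM\_FIT's actual code; it reproduces the quadratic fit of the factorials example shown below.

```python
def lstsq_fit(vals, basis_funcs, pts):
    """Least-squares fit by solving the normal equations
    (A^T A) c = A^T y with Gaussian elimination (illustrative sketch)."""
    m, n = len(pts), len(basis_funcs)
    A = [[f(x) for f in basis_funcs] for x in pts]
    # Build the normal equations
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    aty = [sum(A[k][i] * vals[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            factor = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= factor * ata[col][c]
            aty[r] -= factor * aty[col]
    # Back substitution
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(ata[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = (aty[i] - s) / ata[i][i]
    return coeffs

# Fit the basis {1, x, x^2} to the factorials 1!, ..., 5! at x = 1..5;
# the coefficients come out as approximately [54.6, -61.43, 14.57].
coeffs = lstsq_fit([1, 2, 6, 24, 120],
                   [lambda x: 1.0, lambda x: x, lambda x: x * x],
                   [1, 2, 3, 4, 5])
```

Forming $A^TA$ explicitly is fine for a sketch like this; for larger or nearly degenerate bases an orthogonalizing method (QR) is numerically preferable.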
Example:

\begin{verbatim}
 % approximate a set of factorials by a polynomial
 pts:=for i:=1 step 1 until 5 collect i$
 vals:=for i:=1 step 1 until 5 collect
           for j:=1:i product j$

 num_fit(vals,{1,x,x**2},x=pts);

               2
 {14.571428571*X  - 61.428571429*X + 54.6,{54.6,

  - 61.428571429,14.571428571}}

 num_fit(vals,{1,x,x**2,x**3,x**4},x=pts);

               4               3
 {2.2083333234*X  - 20.249999879*X

                 2
  + 67.791666154*X  - 93.749999133*X

  + 44.999999525,

 {44.999999525, - 93.749999133,67.791666154,

  - 20.249999879,2.2083333234}}

\end{verbatim}

\section{Function Bases}

The following procedures compute sets of functions,
for example to be used for approximation.
All procedures have
two parameters: the expression to be used as the $variable$
(an identifier in most cases) and the
order of the desired system.
The functions are not scaled to a specific interval, but
the $variable$ can be accompanied by a scale factor
and/or a translation
in order to map the generic interval of orthogonality to another one
({\em e.g.\ }$(x-1/2)\,2\pi$).
The result is a function list in ascending order, such that
the first element is the function of order zero and (for
the polynomial systems) the function of order $n$ is the $(n+1)$-th
element.
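For the polynomial systems these lists follow simple three-term recurrences; for example, the Chebyshev polynomials of the first kind satisfy $T_0=1$, $T_1=x$, $T_{n+1}=2xT_n-T_{n-1}$. A Python sketch generating them as coefficient lists, lowest power first (an illustration only, not the package's code):

```python
def chebyshev_base_t(n):
    """Coefficient lists (lowest power first) of T_0 .. T_n, built
    from the recurrence T_{k+1} = 2*x*T_k - T_{k-1}."""
    base = [[1], [0, 1]]                      # T_0 = 1, T_1 = x
    for _ in range(2, n + 1):
        prev, cur = base[-2], base[-1]
        nxt = [0] + [2 * c for c in cur]      # multiply T_k by 2*x
        for i, c in enumerate(prev):          # subtract T_{k-1}
            nxt[i] -= c
        base.append(nxt)
    return base[:n + 1]

# T_3 = 4*x**3 - 3*x
print(chebyshev_base_t(3)[3])                 # [0, -3, 0, 4]
```

The other polynomial bases differ only in the recurrence coefficients.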
\ttindex{monomial\_base}\ttindex{trigonometric\_base}\ttindex{Bernstein\_base}
\ttindex{Legendre\_base}\ttindex{Laguerre\_base}\ttindex{Hermite\_base}
\ttindex{Chebyshev\_base\_T}\ttindex{Chebyshev\_base\_U}
\begin{verbatim}
 monomial_base(x,n)       {1,x,...,x**n}
 trigonometric_base(x,n)  {1,sin x,cos x,sin(2x),cos(2x),...}
 Bernstein_base(x,n)      Bernstein polynomials
 Legendre_base(x,n)       Legendre polynomials
 Laguerre_base(x,n)       Laguerre polynomials
 Hermite_base(x,n)        Hermite polynomials
 Chebyshev_base_T(x,n)    Chebyshev polynomials first kind
 Chebyshev_base_U(x,n)    Chebyshev polynomials second kind
\end{verbatim}

Example:

\begin{verbatim}
 Bernstein_base(x,5);

      5      4       3       2
 { - X  + 5*X  - 10*X  + 10*X  - 5*X + 1,

        4      3      2
  5*X*(X  - 4*X  + 6*X  - 4*X + 1),

      2       3      2
  10*X *( - X  + 3*X  - 3*X + 1),

      3   2
  10*X *(X  - 2*X + 1),

     4
  5*X *( - X + 1),

   5
  X }
\end{verbatim}
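The Bernstein basis in the example can be reproduced numerically; a short Python sketch under the usual definition $B_{i,n}(x)=\binom{n}{i}x^i(1-x)^{n-i}$ (an illustration, not the package's implementation):

```python
from math import comb

def bernstein_base(x, n):
    """Values of the n+1 Bernstein polynomials of degree n at point x,
    in the same ascending order as Bernstein_base(x, n)."""
    return [comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)]

# The Bernstein polynomials of any degree form a partition of unity,
# so the values sum to 1 (up to rounding) at every point of [0, 1].
values = bernstein_base(0.3, 5)
```

The first element equals $(1-x)^5$, matching the first polynomial of the REDUCE output above.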