ActiveTcl User Guide
math::optimize(n) 0.2 "Math"
math::optimize - Optimisation routines
TABLE OF CONTENTS
SYNOPSIS
DESCRIPTION
PROCEDURES
NOTES
EXAMPLES
KEYWORDS
COPYRIGHT
SYNOPSIS
package require Tcl 8.2
package require math::optimize ?0.2?
DESCRIPTION
This package implements several optimisation algorithms:
- Minimize or maximize a function over a given interval
- Solve a linear program (maximize a linear function subject to
linear constraints)
The package is implemented entirely in Tcl. No particular attention
has been paid to the accuracy of the calculations; instead, the
algorithms have been implemented in a straightforward manner.
This document describes the procedures and explains their
usage.
Note: the linear programming algorithm is described below but is
not yet operational.
PROCEDURES
This package defines the following public procedures:
- ::math::optimize::minimize begin end func maxerr
- Minimize the given (continuous) function by examining the
values in the given interval. The procedure evaluates the function
at both ends and at the centre of the interval and then constructs
a new interval of half the length that includes the minimum. No
guarantee is made that the global minimum is found.
The procedure returns the "x" value for which the function is
minimal.
This procedure has been deprecated; use min_bound_1d
instead.
begin - Start of the interval
end - End of the interval
func - Name of the function to be minimized (a
procedure taking one argument).
maxerr - Maximum relative error (defaults to
1.0e-4)
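As an illustration, here is a minimal sketch of a call to minimize; the function name parabola is ours, not part of the package:

```tcl
package require math::optimize

# A simple unimodal test function with its minimum at x = 2
proc parabola { x } { expr {($x - 2.0) * ($x - 2.0)} }

# Search the interval [0, 5] with the default maximum relative error
set xAtMin [::math::optimize::minimize 0.0 5.0 parabola 1.0e-4]
puts "Minimum found near x = $xAtMin"   ;# should be close to 2.0
```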
- ::math::optimize::maximize begin end func maxerr
- Maximize the given (continuous) function by examining the
values in the given interval. The procedure evaluates the function
at both ends and at the centre of the interval and then constructs
a new interval of half the length that includes the maximum. No
guarantee is made that the global maximum is found.
The procedure returns the "x" value for which the function is
maximal.
This procedure has been deprecated; use max_bound_1d
instead.
begin - Start of the interval
end - End of the interval
func - Name of the function to be maximized (a
procedure taking one argument).
maxerr - Maximum relative error (defaults to
1.0e-4)
- ::math::optimize::min_bound_1d
func begin end ?-relerror reltol? ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?
- Minimizes a function of one variable in the given interval. The
procedure uses Brent's method of parabolic interpolation, protected
by golden-section subdivisions if the interpolation is not
converging. No guarantee is made that a global minimum is
found. The function to evaluate, func, must be a
single Tcl command; it will be evaluated with an abscissa appended
as the last argument.
begin and end are the two bounds
of the interval in which the minimum is to be found. They need not
be in increasing order.
reltol, if specified, is the desired upper bound
on the relative error of the result; the default is 1.0e-7. The given
value should never be smaller than the square root of the machine's
floating-point precision, or else convergence is not guaranteed.
abstol, if specified, is the desired upper bound on
the absolute error of the result; the default is 1.0e-10. Caution must
be used with small values of abstol to avoid
overflow/underflow conditions; if the minimum is expected to lie
at a small but non-zero abscissa, you should consider either shifting
the function or changing its length scale.
maxiter may be used to constrain the number of
function evaluations to be performed; default is 100. If the
command evaluates the function more than maxiter
times, it returns an error to the caller.
traceflag is a Boolean value. If true, it causes
the command to print a message on the standard output giving the
abscissa and ordinate at each function evaluation, together with an
indication of what type of interpolation was chosen. Default is 0
(no trace).
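A sketch of a typical call follows; the function name and the chosen tolerances are illustrative, not prescribed by the package:

```tcl
package require math::optimize

# cos(x) has a single minimum at x = pi on the interval [0, 2*pi]
proc myCos { x } { expr {cos($x)} }

set pi [expr {acos(-1.0)}]
set xAtMin [::math::optimize::min_bound_1d myCos 0.0 [expr {2.0 * $pi}] \
                -relerror 1.0e-6 -maxiter 200]
puts "cos(x) is minimal near x = $xAtMin"   ;# should be close to pi
```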
- ::math::optimize::min_unbound_1d
func begin end ?-relerror reltol? ?-abserror abstol? ?-maxiter maxiter? ?-trace traceflag?
- Minimizes a function of one variable over the entire real
number line. The procedure uses parabolic extrapolation combined
with golden-section dilatation to search for a region where a
minimum exists, followed by Brent's method of parabolic
interpolation, protected by golden-section subdivisions if the
interpolation is not converging. No guarantee is made that a
global minimum is found. The function to evaluate,
func, must be a single Tcl command; it will be
evaluated with an abscissa appended as the last argument.
begin and end are two initial
guesses at where the minimum may lie. begin is the
starting point for the minimization, and the difference between
end and begin is used as a hint at
the characteristic length scale of the problem.
reltol, if specified, is the desired upper bound
on the relative error of the result; the default is 1.0e-7. The given
value should never be smaller than the square root of the machine's
floating-point precision, or else convergence is not guaranteed.
abstol, if specified, is the desired upper bound on
the absolute error of the result; the default is 1.0e-10. Caution must
be used with small values of abstol to avoid
overflow/underflow conditions; if the minimum is expected to lie
at a small but non-zero abscissa, you should consider either shifting
the function or changing its length scale.
maxiter may be used to constrain the number of
function evaluations to be performed; default is 100. If the
command evaluates the function more than maxiter
times, it returns an error to the caller.
traceflag is a Boolean value. If true, it causes
the command to print a message on the standard output giving the
abscissa and ordinate at each function evaluation, together with an
indication of what type of interpolation was chosen. Default is 0
(no trace).
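A sketch of a typical call follows (the function name is illustrative). Since the search is unbounded, begin and end only seed the search; the minimum may lie well outside the initial bracket:

```tcl
package require math::optimize

# Minimum at x = 10, far from the initial guesses
proc shifted { x } { expr {($x - 10.0) * ($x - 10.0) + 1.0} }

# Start at x = 0 with a characteristic length scale of about 1
set xAtMin [::math::optimize::min_unbound_1d shifted 0.0 1.0]
puts "Minimum found near x = $xAtMin"   ;# should be close to 10
```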
- ::math::optimize::solveLinearProgram constraints objective
- Solve a linear program in standard form using a
straightforward implementation of the Simplex algorithm. (In the
explanation below: The linear program has N constraints and M
variables).
The procedure returns a list of M values, the values for which the
objective function is maximal, or a single keyword if the linear
program is infeasible or unbounded (either "unfeasible" or
"unbounded").
constraints - Matrix of coefficients plus
maximum values that implement the linear constraints. It is
expected to be a list of N lists of M+1 numbers each, M
coefficients and the maximum value.
objective - The M coefficients of the objective
function
NOTES
Several of the above procedures take the names of
procedures as arguments. To avoid problems with the
visibility of these procedures, the fully-qualified name
of these procedures is determined inside the optimize routines. For
the user this has only one consequence: the named procedure must be
visible in the calling procedure. For instance:
|
namespace eval ::mySpace {
    namespace export calcfunc
    proc calcfunc { x } { return $x }
}
#
# Use a fully-qualified name
#
namespace eval ::myCalc {
    puts [::math::optimize::min_bound_1d ::mySpace::calcfunc $begin $end]
}
#
# Import the name
#
namespace eval ::myCalc {
    namespace import ::mySpace::calcfunc
    puts [::math::optimize::min_bound_1d calcfunc $begin $end]
}
|
EXAMPLES
The simple procedures minimize and maximize have
been deprecated: the alternatives are much more flexible, more robust
and require fewer function evaluations.
Let us take a few simple examples:
Determine the maximum of f(x) = x^3 exp(-3x), on the interval
(0,10):
|
proc efunc { x } { expr {$x*$x*$x * exp(-3.0*$x)} }
puts "Maximum at: [::math::optimize::max_bound_1d efunc 0.0 10.0]"
|
The maximum allowed error determines the number of steps taken
(with each step in the iteration the interval is reduced by a
factor of 1/2). Hence, a maximum error of 0.0001 is achieved in
approximately 14 steps.
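The step count follows from requiring (1/2)^n <= maxerr, which can be computed directly in Tcl:

```tcl
# Smallest n such that (1/2)^n <= 1.0e-4
set maxerr 1.0e-4
set steps [expr {int(ceil(log($maxerr) / log(0.5)))}]
puts $steps   ;# prints 14
```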
An example of a linear program is:
Optimise the expression 3x+2y, where:
|
x >= 0 and y >= 0 (implicit constraints, part of the
definition of linear programs)
x + y <= 1 (constraints specific to the problem)
2x + 5y <= 10
|
This problem can be solved as follows:
|
set solution [::math::optimize::solveLinearProgram { { 1.0 1.0 1.0 }
{ 2.0 5.0 10.0 } } { 3.0 2.0 }]
|
Note that a "greater than or equal" constraint like:
    x + y >= 1
can be turned into standard form by multiplying both sides by -1:
    -x - y <= -1
The theory of linear programming is the subject of many a
textbook, and the Simplex algorithm that is implemented here is the
best-known method to solve problems of this type, but it is not the
only one.
KEYWORDS
linear program, math, maximum, minimum, optimization
COPYRIGHT
Copyright © 2004 Arjen Markus
<arjenmarkus@users.sourceforge.net>
Copyright © 2004 Kevin B. Kenny
<kennykb@users.sourceforge.net>