Package gmisclib :: Module gpk_lsq :: Class reg_linear_least_squares

Class reg_linear_least_squares



Instance Methods
 
__init__(self, a, y, regstr=0.0, regtgt=None, rscale=None, copy=True)
This solves min over x of |a*x - y|^2 + |regstr*(x - regtgt)|^2, yielding (x, the_fit, rank, s).
 
sv_reg(self)
Singular values of the regularized problem.
 
sv_unreg(self)
Singular values of the unregularized problem.
eff_rank(self) -> float
Returns something like the rank of the solution, but rather than counting how many dimensions can be solved at all, it reports how many dimensions can be solved with relatively good accuracy.
 
hat(self, copy=True)
Hat matrix diagonal. Data points that are far from the centroid of the X-space are potentially influential.
eff_n(self) -> float
Returns something like the number of data points, except that it looks at their weighting and the structure of the problem. (Inherited from gmisclib.gpk_lsq.lls_base)
 
fit(self, copy=False) (Inherited from gmisclib.gpk_lsq.lls_base)
 
residual(self) (Inherited from gmisclib.gpk_lsq.lls_base)
 
set_y(self, y, copy=True) (Inherited from gmisclib.gpk_lsq.lls_base)
 
variance_about_fit(self)
Returns the estimator of the standard deviation of the data about the fit. (Inherited from gmisclib.gpk_lsq.lls_base)
 
x(self, y=None, copy=True) (Inherited from gmisclib.gpk_lsq.lls_base)
 
y(self, copy=True) (Inherited from gmisclib.gpk_lsq.lls_base)

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

Inherited from object: __class__

Method Details

__init__(self, a, y, regstr=0.0, regtgt=None, rscale=None, copy=True)
(Constructor)


This solves min over x of |a*x - y|^2 + |regstr*(x - regtgt)|^2, yielding (x, the_fit, rank, s). Normally, a.shape==(m,n) and y.shape==(m,q), where m is the number of data points to be fit, n is the number of parameters to use in the fit (equivalently, the number of basis functions), and q is the number of separate sets of equations that you are fitting. Then x has shape (n,q) and the_fit has shape (m,q).

The regularization target, regtgt, has the same shape as x, that is, (n,q). (It must be a vector if and only if y is a vector.) Regstr, the strength of the regularization, is normally an (n,n) matrix, though any (*,n) matrix will work, as will a scalar.

Y may be 1-D (a vector), in which case the fit is a vector. This is the normal case, where you are fitting one equation. If y is 2-D, each column (second index) of y is a separate fit, and each column of the solution is a separate result.

Overrides: object.__init__
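For example, a minimal usage sketch under the shape conventions above (assumes gmisclib is importable; the random data and the 0.1 regularization strength are purely illustrative):

    import numpy as np
    from gmisclib.gpk_lsq import reg_linear_least_squares

    m, n, q = 100, 5, 2                    # data points, parameters, equation sets
    a = np.random.standard_normal((m, n))  # design matrix, shape (m, n)
    y = np.random.standard_normal((m, q))  # observations, shape (m, q)
    regtgt = np.zeros((n, q))              # pull the solution toward zero
    regstr = 0.1 * np.eye(n)               # (n, n) regularization strength

    soln = reg_linear_least_squares(a, y, regstr=regstr, regtgt=regtgt)
    x = soln.x()          # solution, shape (n, q)
    the_fit = soln.fit()  # fitted values a*x, shape (m, q)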

eff_rank(self)


Returns something like the rank of the solution, but rather than counting how many dimensions can be solved at all, it reports how many dimensions can be solved with relatively good accuracy.

Returns: float
Overrides: lls_base.eff_rank
(inherited documentation)
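The exact formula is not documented here. Purely as an illustration, one common effective-rank measure built from the singular values s is the participation ratio; this is an assumption, not necessarily what eff_rank() computes:

    import numpy as np

    def participation_ratio(s):
        """Effective number of well-conditioned dimensions:
        (sum s_i^2)^2 / sum s_i^4.  Equals len(s) when all singular
        values are equal; approaches 1 as one value dominates."""
        s2 = np.square(np.asarray(s, dtype=float))
        return s2.sum() ** 2 / np.square(s2).sum()

    print(participation_ratio([3.0, 3.0, 3.0]))   # 3.0: all dimensions usable
    print(participation_ratio([3.0, 0.1, 0.01]))  # ~1.002: one dimension dominates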

hat(self, copy=True)


Hat Matrix Diagonal. Data points that are far from the centroid of the X-space are potentially influential. A measure of the distance between a data point, x_i, and the centroid of the X-space is the data point's associated diagonal element h_i in the hat matrix. Belsley, Kuh, and Welsch (1980) propose a cutoff of 2p/n for the diagonal elements of the hat matrix, where n is the number of observations used to fit the model and p is the number of parameters in the model. Observations with h_i values above this cutoff should be investigated. For linear models, the hat matrix

H = X (X'X)^-1 X'

can be used as a projection matrix. The hat matrix diagonal contains the diagonal elements of the hat matrix

h_i = x_i (X'X)^-1 x_i'

Overrides: lls_base.hat
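For reference, this diagnostic can be sketched directly with numpy for the plain unregularized model (illustrative only; hat() presumably returns the analogous h_i for the regularized problem solved by this class):

    import numpy as np

    def hat_diagonal(X):
        """Diagonal of H = X (X'X)^-1 X', computed stably via the
        reduced QR decomposition: X = Q R implies H = Q Q'."""
        Q, _ = np.linalg.qr(X)
        return np.square(Q).sum(axis=1)

    X = np.random.standard_normal((50, 3))
    h = hat_diagonal(X)
    n_obs, p = X.shape
    suspects = np.nonzero(h > 2.0 * p / n_obs)[0]  # Belsley-Kuh-Welsch cutoff
    print(h.sum())  # equals p: the trace of a projection matrix is its rank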