Package gmisclib :: Module gpk_lsq :: Class linear_least_squares

Class linear_least_squares



Instance Methods

__init__(self, a, y=None, minsv=None, minsvr=None, copy=True)
    This solves the set of linear equations a*x = y, and lets you query properties of the fit via methods.

sv(self)

rank(self)

hat(self, copy=True)
    Hat matrix diagonal. Data points that are far from the centroid of the X-space are potentially influential.

x_variances(self)
    Estimated variances of the solution.

eff_rank(self) → float
    Returns something like the rank of the solution, but rather than counting how many dimensions can be solved at all, it reports how many dimensions can be solved with relatively good accuracy.

eff_n(self) → float
    Returns something like the number of data points, except that it accounts for their weighting and the structure of the problem. (Inherited from gmisclib.gpk_lsq.lls_base)

fit(self, copy=False) (Inherited from gmisclib.gpk_lsq.lls_base)

residual(self) (Inherited from gmisclib.gpk_lsq.lls_base)

set_y(self, y, copy=True) (Inherited from gmisclib.gpk_lsq.lls_base)

variance_about_fit(self)
    Returns an estimate of the standard deviation of the data about the fit. (Inherited from gmisclib.gpk_lsq.lls_base)

x(self, y=None, copy=True) (Inherited from gmisclib.gpk_lsq.lls_base)

y(self, copy=True) (Inherited from gmisclib.gpk_lsq.lls_base)

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

Inherited from object: __class__

Method Details

__init__(self, a, y=None, minsv=None, minsvr=None, copy=True)
(Constructor)


This solves the set of linear equations a*x = y, and lets you query properties of the fit via methods. Normally, a.shape==(m,n) and y.shape==(m,q), and the returned x.shape==(n,q), where m is the number of constraints provided by the data, n is the number of parameters in the fit (equivalently, the number of basis functions), and q is the number of separate sets of equations that you are fitting. Then self.x() has shape (n,q) and self.fit() has shape (m,q). Interpreting this as a linear regression, there are n parameters in the model and m measurements; q is the number of times you apply the model to a different data set, each one yielding a different solution.

The procedure uses a singular value decomposition algorithm and treats all singular values smaller than minsv as zero (i.e. it drops them). If minsvr is specified, it also treats all singular values smaller than minsvr times the largest singular value as zero. The returned rank is the rank of the solution, which is normally the number of nonzero elements in x. Note that the shape of the solution vector or matrix is defined by a and y, and the rank can be smaller than m.

Overrides: object.__init__

Note: y may be a 1-D array (a vector), in which case the fit is a vector; this is the normal case where you are fitting one equation. If y is a 2-D matrix, each column (second index) of y is a separate fit, and each column of the solution is a separate result.
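The singular-value thresholding described above can be sketched in plain numpy. This is an illustrative re-implementation of the algorithm as documented, not the gmisclib source; `svd_lstsq` is a hypothetical helper name.

```python
import numpy as np

def svd_lstsq(a, y, minsv=None, minsvr=None):
    """Solve a @ x = y by SVD, dropping small singular values.

    Sketch of the minsv/minsvr thresholding described above;
    not the actual gmisclib implementation.
    """
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    keep = np.ones_like(s, dtype=bool)
    if minsv is not None:
        keep &= s > minsv
    if minsvr is not None:
        keep &= s > minsvr * s.max()
    # Invert only the retained singular values; the rest are treated as zero.
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    uty = u.T @ y
    coef = s_inv * uty if uty.ndim == 1 else s_inv[:, None] * uty
    return vt.T @ coef, int(keep.sum())

# m=3 constraints, n=2 parameters, a single right-hand side.
a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
x, rank = svd_lstsq(a, y)
```

With no thresholds set, this reduces to the ordinary pseudoinverse solution; passing minsv or minsvr drops ill-conditioned directions and lowers the reported rank accordingly.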

hat(self, copy=True)


Hat matrix diagonal. Data points that are far from the centroid of the X-space are potentially influential. A measure of the distance between a data point, xi, and the centroid of the X-space is the data point's associated diagonal element hi in the hat matrix. Belsley, Kuh, and Welsch (1980) propose a cutoff of 2p/n for the diagonal elements of the hat matrix, where n is the number of observations used to fit the model and p is the number of parameters in the model. Observations with hi values above this cutoff should be investigated. For linear models, the hat matrix

H = X inv(X'X) X'

can be used as a projection matrix. The hat matrix diagonal variable contains the diagonal elements of the hat matrix

hi = xi inv(X'X) xi'

Overrides: lls_base.hat
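A common way to compute the hat matrix diagonal without forming H explicitly is via a QR decomposition of X: since X = QR, H = X inv(X'X) X' = QQ'. The helper below is an assumed sketch for illustration, not the gmisclib implementation.

```python
import numpy as np

def hat_diagonal(X):
    """Diagonal of H = X inv(X'X) X', computed from X = QR as diag(QQ')."""
    q, _ = np.linalg.qr(X)           # reduced QR; H = Q Q'
    return np.sum(q**2, axis=1)      # h_i = squared norm of row i of Q

# Straight-line fit: intercept plus slope, n=5 observations, p=2 parameters.
X = np.column_stack([np.ones(5), np.arange(5.0)])
h = hat_diagonal(X)
n, p = X.shape
influential = h > 2.0 * p / n        # Belsley-Kuh-Welsch cutoff
```

The diagonal always sums to p (the trace of a projection matrix equals its rank), which is a useful sanity check on the computation.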

x_variances(self)


Estimated variances of the solution. This is the diagonal of the covariance matrix of the solution x.
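Under the usual least-squares assumptions, the covariance of the solution is sigma^2 * inv(a'a), whose diagonal can be read directly from the SVD of a. A hedged sketch follows; `x_variances_sketch` and `resid_var` are assumed names for illustration, not the library's API.

```python
import numpy as np

def x_variances_sketch(a, resid_var):
    """Diagonal of resid_var * inv(a'a).

    From the SVD a = U S V', inv(a'a) = V S^{-2} V', so the i-th
    diagonal element is sum_j V[i,j]^2 / s_j^2.
    """
    _, s, vt = np.linalg.svd(a, full_matrices=False)
    return resid_var * np.sum((vt.T / s) ** 2, axis=1)

# Orthogonal design: variances are resid_var / s_j^2 directly.
var = x_variances_sketch(np.diag([2.0, 1.0]), 1.0)
```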

eff_rank(self)


Returns something like the rank of the solution, but rather than counting how many dimensions can be solved at all, it reports how many dimensions can be solved with relatively good accuracy.

Returns: float
Overrides: lls_base.eff_rank
(inherited documentation)