Package lib :: Module power

Module power


Functions
 
calc_var_of_window()
 
smooth(ph, dt_in, dt_out, extra=0.0, wt=None)
Smooths a data set, simultaneously resampling to a lower sampling rate.
 
smooth_guts(ph, dt_in, dt_out, w, wt=None)
 
old_smooth(ph, dt_in, dt_out, extra=0.0, wt=None)
Smooths a data set, simultaneously resampling to a lower sampling rate.
 
local_power(d, dt_in, dt_out, extra=0.0)
THIS IS WRONG! IT PROBABLY SHOULD BE something like sqrt(d**2 + hilbert_transform(d)**2).
 
test1()
 
test2()
 
test3()
Variables
  __package__ = 'lib'

Imports: math, numpy, hilbert_xform, P1


Function Details

smooth(ph, dt_in, dt_out, extra=0.0, wt=None)


Smooths a data set, simultaneously resampling to a lower sampling rate. It uses successive boxcar averages followed by decimations for the initial smoothing, then a convolution with a Gaussian. Even if dt_out>>dt_in, it uses only O(log(dt_out/dt_in)) operations.

Parameters:
  • ph (numpy.ndarray) - normally a 1-dimensional array containing the data to be smoothed. If the data is higher-dimensional, the time axis is assumed to run along axis=0, and the return value will be an array of the same dimension.
  • dt_in (float, in the same units as extra and dt_out) - input sampling rate.
  • dt_out (float, in the same units as dt_in and extra) - output sampling rate.
  • extra (float, in the same units as dt_in and dt_out) - extra smoothing time constant to apply. Extra is the standard deviation of a Gaussian kernel smooth that is applied as the last step. This last step is not implemented efficiently, so if extra>>dt_out it can slow down the algorithm substantially.
  • wt (numpy.ndarray or None) - None (which indicates uniform weighting) or a numpy.ndarray with the same length (along axis 0) as ph.
Returns:
(rv, t0), where rv is a numpy array and t0 is a float offset of the first element relative to the start of the input data.
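
For reference, the boxcar-and-decimate strategy described above can be sketched roughly as follows. This is an illustrative approximation for 1-dimensional input only, not the actual implementation of lib.power.smooth(); the function name boxcar_decimate_smooth and the handling of t0 are assumptions made for the example.

    import numpy as np

    def boxcar_decimate_smooth(ph, dt_in, dt_out, extra=0.0):
        # Illustrative sketch only -- not lib.power.smooth() itself.
        d = np.asarray(ph, dtype=float)
        dt, t0 = float(dt_in), 0.0
        # Successive factor-of-2 boxcar averages and decimations:
        # O(log(dt_out/dt_in)) passes, each over a halved array.
        while 2.0 * dt <= dt_out:
            n = (d.shape[0] // 2) * 2
            d = 0.5 * (d[0:n:2] + d[1:n:2])
            t0 += 0.5 * dt  # each pairwise average shifts the first sample by dt/2
            dt *= 2.0
        # Final Gaussian kernel smooth (direct convolution; slow when extra >> dt).
        if extra > 0.0:
            half = int(np.ceil(3.0 * extra / dt))
            x = np.arange(-half, half + 1) * dt
            kernel = np.exp(-0.5 * (x / extra) ** 2)
            kernel /= kernel.sum()
            d = np.convolve(d, kernel, mode='same')
        return d, t0

A typical call would look like rv, t0 = boxcar_decimate_smooth(data, dt_in=0.01, dt_out=0.16, extra=0.2), with rv sampled at roughly dt_out and t0 giving the offset of its first sample.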

old_smooth(ph, dt_in, dt_out, extra=0.0, wt=None)


Smooths a data set, simultaneously resampling to a lower sampling rate. It uses successive boxcar averages followed by decimations for the initial smoothing, then a convolution with a Gaussian. Even if dt_out>>dt_in, it uses only O(log(dt_out/dt_in)) operations.

Parameters:
  • ph (numpy.ndarray) - a 1-dimensional array containing the data to be smoothed. (Query: will this work for higher-dimensional data?)
  • dt_in (float, in the same units as extra and dt_out) - input sampling rate.
  • dt_out (float, in the same units as dt_in and extra) - output sampling rate.
  • extra (float, in the same units as dt_in and dt_out) - extra smoothing time constant to apply. Extra is the standard deviation of a Gaussian kernel smooth that is applied as the last step. This last step is not implemented efficiently, so if extra>>dt_out it can slow down the algorithm substantially.
Returns:
(rv, t0), where rv is a numpy array and t0 is a float offset of the first element relative to the start of the input data.

local_power(d, dt_in, dt_out, extra=0.0)


THIS IS WRONG! IT PROBABLY SHOULD BE something like sqrt(d**2 + hilbert_transform(d)**2). The Hilbert transform only supplies the imaginary part of the analytic signal; the current code leaves out the real part!
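
As a point of reference for the suggested fix, the analytic-signal envelope can be computed as sketched below. This example uses scipy.signal.hilbert (which returns d + i*H[d]) rather than the module's own hilbert_xform import, so it illustrates the intended formula rather than the existing code.

    import numpy as np
    from scipy.signal import hilbert

    def envelope(d):
        # scipy.signal.hilbert returns the analytic signal d + i*H[d],
        # so its magnitude is sqrt(d**2 + H[d]**2), the local amplitude.
        return np.abs(hilbert(np.asarray(d, dtype=float)))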