ssspy.bss.ica#

In this module, we separate time-domain multichannel signals using independent component analysis (ICA) [1]. We denote the number of sources and microphones as \(N\) and \(M\), respectively. We also denote source, observed, and separated signals (in time-domain) as \(\boldsymbol{s}_{t}\), \(\boldsymbol{x}_{t}\), and \(\boldsymbol{y}_{t}\), respectively.

\[\begin{split}\boldsymbol{s}_{t} &= (s_{t1},\ldots,s_{tn},\ldots,s_{tN})^{\mathsf{T}}\in\mathbb{R}^{N}, \\ \boldsymbol{x}_{t} &= (x_{t1},\ldots,x_{tm},\ldots,x_{tM})^{\mathsf{T}}\in\mathbb{R}^{M}, \\ \boldsymbol{y}_{t} &= (y_{t1},\ldots,y_{tn},\ldots,y_{tN})^{\mathsf{T}}\in\mathbb{R}^{N},\end{split}\]

where \(t=1,\ldots,T\) is an index of time samples. When a mixing system is time-invariant, \(\boldsymbol{x}_{t}\) is represented as follows:

\[\boldsymbol{x}_{t} = \boldsymbol{A}\boldsymbol{s}_{t},\]

where \(\boldsymbol{A}=(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n},\ldots,\boldsymbol{a}_{N})\in\mathbb{R}^{M\times N}\) is a mixing matrix. If \(M=N\) and \(\boldsymbol{A}\) is non-singular, a demixing system is represented as

\[\boldsymbol{y}_{t} = \boldsymbol{W}\boldsymbol{x}_{t},\]

where \(\boldsymbol{W}=(\boldsymbol{w}_{1},\ldots,\boldsymbol{w}_{n},\ldots,\boldsymbol{w}_{N})^{\mathsf{T}}\in\mathbb{R}^{N\times M}\) is a demixing matrix. The negative log-likelihood of observed signals (divided by \(T\)) is computed as follows:

\[\begin{split}\mathcal{L} &= -\frac{1}{T}\log p(\mathcal{X}) \\ &= -\frac{1}{T}\left(\log p(\mathcal{Y}) \ + \log|\det\boldsymbol{W}|^{T} \right) \\ &= -\frac{1}{T}\sum_{t,n}\log p(y_{tn}) - \log|\det\boldsymbol{W}| \\ &= \frac{1}{T}\sum_{t,n}G(y_{tn}) - \log|\det\boldsymbol{W}|, \\ G(y_{tn}) &= -\log p(y_{tn}),\end{split}\]

where \(G(y_{tn})\) is a contrast function. The derivative of \(G(y_{tn})\) is called a score function.

\[\phi(y_{tn}) = \frac{\partial G(y_{tn})}{\partial y_{tn}}.\]
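
As a concrete illustration of this notation, the following minimal NumPy sketch (illustrative variable names only, not part of the API) builds a toy instantaneous mixture, separates it with \(\boldsymbol{W}=\boldsymbol{A}^{-1}\), and evaluates \(\mathcal{L}\) for the Laplace contrast \(G(y_{tn})=|y_{tn}|\):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> n_sources, n_samples = 2, 16000
>>> s = rng.laplace(size=(n_sources, n_samples))      # sources s_t
>>> A = rng.standard_normal((n_sources, n_sources))   # mixing matrix A
>>> x = A @ s                                          # mixture x_t = A s_t
>>> W = np.linalg.inv(A)                               # ideal demixing matrix
>>> y = W @ x                                          # separated signal y_t = W x_t
>>> loss = np.abs(y).sum(axis=0).mean() - np.log(np.abs(np.linalg.det(W)))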

Algorithms#

class ssspy.bss.ica.GradICABase(step_size=0.1, contrast_fn=None, score_fn=None, callbacks=None, record_loss=True)#

Base class of independent component analysis (ICA) using the gradient descent.

Parameters:
  • step_size (float) – A step size of the gradient descent. Default: 1e-1.

  • contrast_fn (callable) – A contrast function which corresponds to \(-\log p(y_{tn})\). This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • score_fn (callable) – A score function which corresponds to the partial derivative of the contrast function. This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • record_loss (bool) – Record the loss at each iteration of the gradient descent if record_loss=True. Default: True.

__call__(input, n_iter=100, initial_call=True, **kwargs)#

Separate a time-domain multichannel signal.

Parameters:
  • input (numpy.ndarray) – Mixture signal in time-domain. The shape is (n_channels, n_samples).

  • n_iter (int) – Number of iterations of demixing filter updates. Default: 100.

  • initial_call (bool) – If True, perform callbacks (and computation of loss if necessary) before iterations.

Return type:

ndarray

Returns:

numpy.ndarray of separated signal in time-domain. The shape is (n_sources, n_samples).

compute_logdet(demix_filter)#

Compute the log-determinant of the demixing filter.

Parameters:

demix_filter (numpy.ndarray) – Demixing filter with shape of (n_sources, n_channels).

Return type:

ndarray

Returns:

numpy.ndarray of the computed log-determinant value \(\log|\det\boldsymbol{W}|\).
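
When n_sources equals n_channels, this quantity is simply \(\log|\det\boldsymbol{W}|\); a plain NumPy equivalent (an illustrative sketch, not the method's internal code) is:

>>> import numpy as np
>>> demix_filter = np.array([[1.0, 0.5], [0.2, 1.0]])
>>> logdet = np.log(np.abs(np.linalg.det(demix_filter)))
>>> round(float(logdet), 4)
-0.1054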

compute_loss()#

Compute loss \(\mathcal{L}\).

\(\mathcal{L}\) is given as follows:

\[\begin{split}\mathcal{L} \ &= \frac{1}{T}\sum_{t,n}G(y_{tn}) \ - \log|\det\boldsymbol{W}| \\ G(y_{tn}) \ &= - \log p(y_{tn})\end{split}\]
Return type:

float

Returns:

Computed loss.

separate(input, demix_filter)#

Separate input using demix_filter.

\[\boldsymbol{y}_{t} = \boldsymbol{W}\boldsymbol{x}_{t}\]
Parameters:
  • input (numpy.ndarray) – The mixture signal in time-domain. The shape is (n_channels, n_samples).

  • demix_filter (numpy.ndarray) – The demixing filters to separate input. The shape is (n_sources, n_channels).

Return type:

ndarray

Returns:

numpy.ndarray of the separated signal in time-domain. The shape is (n_sources, n_samples).
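
Since the separation is a single matrix product, it can be reproduced in NumPy as follows (a hedged sketch of the same operation, not the method itself):

>>> import numpy as np
>>> n_channels, n_samples = 2, 16000
>>> waveform_mix = np.random.randn(n_channels, n_samples)
>>> demix_filter = np.eye(n_channels)
>>> waveform_est = demix_filter @ waveform_mix   # y_t = W x_t for every t
>>> waveform_est.shape
(2, 16000)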

class ssspy.bss.ica.FastICABase(contrast_fn=None, score_fn=None, d_score_fn=None, callbacks=None, record_loss=True)#

Base class of fast independent component analysis (FastICA).

Parameters:
  • contrast_fn (callable) – A contrast function which corresponds to \(-\log p(y_{tn})\). This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • score_fn (callable) – A score function which corresponds to the partial derivative of the contrast function. This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • d_score_fn (callable) – A partial derivative of the score function. This function is expected to return an array of the same shape as its input.

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • record_loss (bool) – Record the loss at each iteration of the fixed-point algorithm if record_loss=True. Default: True.

__call__(input, n_iter=100, initial_call=True, **kwargs)#

Separate a time-domain multichannel signal.

Parameters:
  • input (numpy.ndarray) – Mixture signal in time-domain. The shape is (n_channels, n_samples).

  • n_iter (int) – Number of iterations of demixing filter updates. Default: 100.

  • initial_call (bool) – If True, perform callbacks (and computation of loss if necessary) before iterations.

Return type:

ndarray

Returns:

numpy.ndarray of the separated signal in time-domain. The shape is (n_sources, n_samples).

compute_loss()#

Compute loss \(\mathcal{L}\).

\(\mathcal{L}\) is given as follows:

\[\begin{split}\mathcal{L} \ &= \frac{1}{T}\sum_{t,n}G(y_{tn}) \\ G(y_{tn}) \ &= - \log p(y_{tn})\end{split}\]
Return type:

float

Returns:

Computed loss.

separate(input, demix_filter, use_whitening=True)#

Separate input using demix_filter.

If use_whitening=True, we apply whitening to input mixture \(\boldsymbol{x}_{t}\).

\[\begin{split}\boldsymbol{y}_{t} &= \boldsymbol{W}\boldsymbol{z}_{t}, \\ \boldsymbol{z}_{t} &= \boldsymbol{\Lambda}^{-\frac{1}{2}} \ \boldsymbol{\Gamma}^{\mathsf{T}}\boldsymbol{x}_{t}, \\ \boldsymbol{\Lambda} &:= \mathrm{diag}(\lambda_{1},\ldots,\lambda_{m},\ldots,\lambda_{M}) \ \in\mathbb{R}^{M\times M}, \\ \boldsymbol{\Gamma} &:= (\boldsymbol{\gamma}_{1}, \ldots, \boldsymbol{\gamma}_{m}, \ldots, \boldsymbol{\gamma}_{M}) \ \in\mathbb{R}^{M\times M},\end{split}\]

where \(\lambda_{m}\) is an eigenvalue of \(\sum_{t}\boldsymbol{x}_{t}\boldsymbol{x}_{t}^{\mathsf{T}}\) and \(\boldsymbol{\gamma}_{m}\) is the corresponding eigenvector.

Otherwise (use_whitening=False), we do not apply whitening.

\[\boldsymbol{y}_{t} = \boldsymbol{W}\boldsymbol{x}_{t}.\]
Parameters:
  • input (numpy.ndarray) – The mixture signal in time-domain. The shape is (n_channels, n_samples).

  • demix_filter (numpy.ndarray) – The demixing filters to separate input. The shape is (n_sources, n_channels).

  • use_whitening (bool) – If use_whitening=True, whitening (sphering) is applied to input. Default: True.

Return type:

ndarray

Returns:

numpy.ndarray of the separated signal in time-domain. The shape is (n_sources, n_samples).
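
The whitening step can be reproduced with a short NumPy sketch following the eigendecomposition above (illustrative code only; the library may implement it differently):

>>> import numpy as np
>>> n_channels, n_samples = 2, 16000
>>> x = np.random.randn(n_channels, n_samples)   # mixture x_t
>>> cov = x @ x.T                                 # sum_t x_t x_t^T
>>> lam, gamma = np.linalg.eigh(cov)              # Lambda and Gamma
>>> z = np.diag(lam ** -0.5) @ gamma.T @ x        # whitened signal z_t
>>> np.allclose(z @ z.T, np.eye(n_channels))
True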

class ssspy.bss.ica.GradICA(step_size=0.1, contrast_fn=None, score_fn=None, callbacks=None, is_holonomic=False, record_loss=True)#

Independent component analysis (ICA) using the gradient descent.

Parameters:
  • step_size (float) – A step size of the gradient descent. Default: 1e-1.

  • contrast_fn (callable) – A contrast function which corresponds to \(-\log p(y_{tn})\). This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • score_fn (callable) – A score function which corresponds to the partial derivative of the contrast function. This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • is_holonomic (bool) – If is_holonomic=True, Holonomic-type update is used. Otherwise, Nonholonomic-type update is used. Default: False.

  • record_loss (bool) – Record the loss at each iteration of the gradient descent if record_loss=True. Default: True.

Examples

Update demixing filters using Holonomic-type update:

>>> def contrast_fn(y):
...     return np.abs(y)

>>> def score_fn(y):
...     return np.sign(y)

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = GradICA(
...     contrast_fn=contrast_fn,
...     score_fn=score_fn,
...     is_holonomic=True,
... )
>>> waveform_est = ica(waveform_mix, n_iter=1000)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

Update demixing filters using Nonholonomic-type update:

>>> def contrast_fn(y):
...     return np.abs(y)

>>> def score_fn(y):
...     return np.sign(y)

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = GradICA(
...     contrast_fn=contrast_fn,
...     score_fn=score_fn,
...     is_holonomic=False,
... )
>>> waveform_est = ica(waveform_mix, n_iter=1000)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

update_once()#

Update demixing filters once using the gradient descent.

If is_holonomic=True, demixing filters are updated as follows:

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}} \ -\boldsymbol{I}\right)\boldsymbol{W}^{-\mathsf{T}},\]

where

\[\begin{split}\boldsymbol{\phi}(\boldsymbol{y}_{t}) &= \left(\phi(y_{t1}),\ldots,\phi(y_{tN})\right)^{\mathsf{T}}\in\mathbb{R}^{N}, \\ \phi(y_{tn}) &= \frac{\partial G(y_{tn})}{\partial y_{tn}}, \\ G(y_{tn}) &= -\log p(y_{tn}).\end{split}\]

Otherwise (is_holonomic=False),

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\cdot\mathrm{offdiag}\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}}\right) \ \boldsymbol{W}^{-\mathsf{T}}.\]
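
A single holonomic step of this update can be written directly in NumPy; the sketch below uses illustrative names (x, W, step_size) rather than the class attributes:

>>> import numpy as np
>>> def score_fn(y):
...     return np.sign(y)
>>> n_channels, n_samples = 2, 16000
>>> step_size = 1e-1
>>> x = np.random.randn(n_channels, n_samples)   # mixture
>>> W = np.eye(n_channels)                        # current demixing filter
>>> y = W @ x
>>> grad = (score_fn(y) @ y.T) / n_samples - np.eye(n_channels)
>>> W = W - step_size * grad @ np.linalg.inv(W).T   # holonomic gradient step
>>> W.shape
(2, 2)
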
class ssspy.bss.ica.NaturalGradICA(step_size=0.1, contrast_fn=None, score_fn=None, callbacks=None, is_holonomic=False, record_loss=True)#

Independent component analysis (ICA) using the natural gradient descent [2].

Parameters:
  • step_size (float) – A step size of the gradient descent. Default: 1e-1.

  • contrast_fn (callable) – A contrast function which corresponds to \(-\log p(y_{tn})\). This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • score_fn (callable) – A score function which corresponds to the partial derivative of the contrast function. This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • is_holonomic (bool) – If is_holonomic=True, Holonomic-type update is used. Otherwise, Nonholonomic-type update is used. Default: False.

  • record_loss (bool) – Record the loss at each iteration of the gradient descent if record_loss=True. Default: True.

Examples

Update demixing filters using Holonomic-type update:

>>> def contrast_fn(y):
...     return np.abs(y)

>>> def score_fn(y):
...     return np.sign(y)

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = NaturalGradICA(
...     contrast_fn=contrast_fn,
...     score_fn=score_fn,
...     is_holonomic=True,
... )
>>> waveform_est = ica(waveform_mix, n_iter=100)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

Update demixing filters using Nonholonomic-type update:

>>> def contrast_fn(y):
...     return np.abs(y)

>>> def score_fn(y):
...     return np.sign(y)

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = NaturalGradICA(
...     contrast_fn=contrast_fn,
...     score_fn=score_fn,
...     is_holonomic=False,
... )
>>> waveform_est = ica(waveform_mix, n_iter=100)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

update_once()#

Update demixing filters once using the natural gradient descent.

If is_holonomic=True, demixing filters are updated as follows:

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}} \ -\boldsymbol{I}\right)\boldsymbol{W},\]

where

\[\begin{split}\boldsymbol{\phi}(\boldsymbol{y}_{t}) &= \left(\phi(y_{t1}),\ldots,\phi(y_{tN})\right)^{\mathsf{T}}\in\mathbb{R}^{N}, \\ \phi(y_{tn}) &= \frac{\partial G(y_{tn})}{\partial y_{tn}}, \\ G(y_{tn}) &= -\log p(y_{tn}).\end{split}\]

Otherwise (is_holonomic=False),

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\cdot\mathrm{offdiag}\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}}\right) \ \boldsymbol{W}.\]
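
Compared with GradICA, the natural gradient multiplies the residual by \(\boldsymbol{W}\) instead of \(\boldsymbol{W}^{-\mathsf{T}}\), so no matrix inversion is required; a minimal sketch of the holonomic step with illustrative names:

>>> import numpy as np
>>> n_channels, n_samples = 2, 16000
>>> step_size = 1e-1
>>> x = np.random.randn(n_channels, n_samples)   # mixture
>>> W = np.eye(n_channels)                        # current demixing filter
>>> y = W @ x
>>> grad = (np.sign(y) @ y.T) / n_samples - np.eye(n_channels)
>>> W = W - step_size * grad @ W                  # natural gradient step
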
class ssspy.bss.ica.FastICA(contrast_fn=None, score_fn=None, d_score_fn=None, callbacks=None, record_loss=True)#

Fast independent component analysis (FastICA) [3].

In FastICA, whitening (sphering) is applied to the input signal.

\[\begin{split}\boldsymbol{z}_{t} &= \boldsymbol{\Lambda}^{-\frac{1}{2}} \ \boldsymbol{\Gamma}^{\mathsf{T}}\boldsymbol{x}_{t}, \\ \boldsymbol{\Lambda} &:= \mathrm{diag}(\lambda_{1},\ldots,\lambda_{m},\ldots,\lambda_{M}) \ \in\mathbb{R}^{M\times M}, \\ \boldsymbol{\Gamma} &:= (\boldsymbol{\gamma}_{1}, \ldots, \boldsymbol{\gamma}_{m}, \ldots, \boldsymbol{\gamma}_{M}) \ \in\mathbb{R}^{M\times M},\end{split}\]

where \(\lambda_{m}\) is an eigenvalue of \(\sum_{t}\boldsymbol{x}_{t}\boldsymbol{x}_{t}^{\mathsf{T}}\) and \(\boldsymbol{\gamma}_{m}\) is the corresponding eigenvector.

Furthermore, \(\boldsymbol{W}\) is constrained to be orthogonal.

\[\boldsymbol{W}\boldsymbol{W}^{\mathsf{T}} = \boldsymbol{I}\]
Parameters:
  • contrast_fn (callable) – A contrast function which corresponds to \(-\log p(y_{tn})\). This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • score_fn (callable) – A score function which corresponds to the partial derivative of the contrast function. This function is expected to receive (n_channels, n_samples) and return (n_channels, n_samples).

  • d_score_fn (callable) – A partial derivative of the score function. This function is expected to return an array of the same shape as its input.

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • record_loss (bool) – Record the loss at each iteration of the fixed-point algorithm if record_loss=True. Default: True.

Examples

>>> def contrast_fn(y):
...     return np.log(1 + np.exp(y))

>>> def score_fn(y):
...     return 1 / (1 + np.exp(-y))

>>> def d_score_fn(y):
...     sigmoid_y = 1 / (1 + np.exp(-y))
...     return sigmoid_y * (1 - sigmoid_y)

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = FastICA(contrast_fn=contrast_fn, score_fn=score_fn, d_score_fn=d_score_fn)
>>> waveform_est = ica(waveform_mix, n_iter=10)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

update_once()#

Update demixing filters once using the fixed-point iteration algorithm.

For \(n=1,\ldots,N\), the demixing filter \(\boldsymbol{w}_{n}\) is updated sequentially:

\[\begin{split}y_{tn} &= \boldsymbol{w}_{n}^{\mathsf{T}}\boldsymbol{z}_{t}, \\ \boldsymbol{w}_{n}^{+} &\leftarrow \frac{1}{T}\sum_{t}\phi(y_{tn})\boldsymbol{z}_{t} - \frac{1}{T}\sum_{t}\frac{\partial\phi(y_{tn})}{\partial y_{tn}}\boldsymbol{w}_{n}, \\ \boldsymbol{w}_{n}^{+} &\leftarrow\boldsymbol{w}_{n}^{+} - \sum_{n'=1}^{n-1}\left(\boldsymbol{w}_{n'}^{\mathsf{T}}\boldsymbol{w}_{n}^{+}\right)\boldsymbol{w}_{n'}, \\ \boldsymbol{w}_{n} &\leftarrow \frac{\boldsymbol{w}_{n}^{+}}{\|\boldsymbol{w}_{n}^{+}\|}.\end{split}\]
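
One sweep of this fixed-point update over whitened data \(\boldsymbol{z}_{t}\) can be sketched in NumPy as follows (illustrative names only, not the class internals):

>>> import numpy as np
>>> def score_fn(y):
...     return 1 / (1 + np.exp(-y))
>>> def d_score_fn(y):
...     sigmoid_y = 1 / (1 + np.exp(-y))
...     return sigmoid_y * (1 - sigmoid_y)
>>> n_sources, n_samples = 2, 16000
>>> z = np.random.randn(n_sources, n_samples)     # whitened mixture z_t
>>> W = np.eye(n_sources)                          # rows are w_n
>>> for n in range(n_sources):
...     y_n = W[n] @ z                             # y_{tn} = w_n^T z_t
...     w_plus = (score_fn(y_n) * z).mean(axis=1) - d_score_fn(y_n).mean() * W[n]
...     w_plus = w_plus - W[:n].T @ (W[:n] @ w_plus)   # orthogonalize against w_1, ..., w_{n-1}
...     W[n] = w_plus / np.linalg.norm(w_plus)
>>> np.allclose(W @ W.T, np.eye(n_sources))
True
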
class ssspy.bss.ica.GradLaplaceICA(step_size=0.1, callbacks=None, is_holonomic=False, record_loss=True)#

Independent component analysis (ICA) using the gradient descent on a Laplace distribution.

We assume \(y_{tn}\) follows a Laplace distribution.

\[p(y_{tn})\propto\exp(-|y_{tn}|)\]
Parameters:
  • step_size (float) – A step size of the gradient descent. Default: 1e-1.

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • is_holonomic (bool) – If is_holonomic=True, Holonomic-type update is used. Otherwise, Nonholonomic-type update is used. Default: False.

  • record_loss (bool) – Record the loss at each iteration of the gradient descent if record_loss=True. Default: True.

Examples

Update demixing filters using Holonomic-type update:

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = GradLaplaceICA(is_holonomic=True)
>>> waveform_est = ica(waveform_mix, n_iter=1000)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

Update demixing filters using Nonholonomic-type update:

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = GradLaplaceICA(is_holonomic=False)
>>> waveform_est = ica(waveform_mix, n_iter=1000)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

compute_loss()#

Compute loss \(\mathcal{L}\).

\(\mathcal{L}\) is given as follows:

\[\mathcal{L} = \frac{1}{T}\sum_{t,n}|y_{tn}| - \log|\det\boldsymbol{W}|\]
Return type:

float

Returns:

Computed loss.

update_once()#

Update demixing filters once using the gradient descent.

If is_holonomic=True, demixing filters are updated as follows:

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}} \ -\boldsymbol{I}\right)\boldsymbol{W}^{-\mathsf{T}},\]

where

\[\boldsymbol{\phi}(\boldsymbol{y}_{t}) = \left(\mathrm{sign}(y_{t1}),\ldots,\mathrm{sign}(y_{tN})\right)^{\mathsf{T}} \ \in\mathbb{R}^{N}.\]

Otherwise (is_holonomic=False),

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\cdot\mathrm{offdiag}\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}}\right) \ \boldsymbol{W}^{-\mathsf{T}}.\]
class ssspy.bss.ica.NaturalGradLaplaceICA(step_size=0.1, callbacks=None, is_holonomic=False, record_loss=True)#

Independent component analysis (ICA) using the natural gradient descent on a Laplace distribution.

We assume \(y_{tn}\) follows a Laplace distribution.

\[p(y_{tn})\propto\exp(-|y_{tn}|)\]
Parameters:
  • step_size (float) – A step size of the gradient descent. Default: 1e-1.

  • callbacks (callable or list[callable], optional) – Callback functions. Each function is called before separation and at each iteration. Default: None.

  • is_holonomic (bool) – If is_holonomic=True, Holonomic-type update is used. Otherwise, Nonholonomic-type update is used. Default: False.

  • record_loss (bool) – Record the loss at each iteration of the gradient descent if record_loss=True. Default: True.

Examples

Update demixing filters using Holonomic-type update:

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = NaturalGradLaplaceICA(is_holonomic=True)
>>> waveform_est = ica(waveform_mix, n_iter=100)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

Update demixing filters using Nonholonomic-type update:

>>> n_channels, n_samples = 2, 160000
>>> waveform_mix = np.random.randn(n_channels, n_samples)

>>> ica = NaturalGradLaplaceICA(is_holonomic=False)
>>> waveform_est = ica(waveform_mix, n_iter=100)
>>> print(waveform_mix.shape, waveform_est.shape)
(2, 160000) (2, 160000)

compute_loss()#

Compute loss \(\mathcal{L}\).

\(\mathcal{L}\) is given as follows:

\[\mathcal{L} = \frac{1}{T}\sum_{t,n}|y_{tn}| - \log|\det\boldsymbol{W}|\]
Return type:

float

Returns:

Computed loss.

update_once()#

Update demixing filters once using the natural gradient descent.

If is_holonomic=True, demixing filters are updated as follows:

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}} \ -\boldsymbol{I}\right)\boldsymbol{W},\]

where

\[\boldsymbol{\phi}(\boldsymbol{y}_{t}) = \left(\mathrm{sign}(y_{t1}),\ldots,\mathrm{sign}(y_{tN})\right)^{\mathsf{T}} \ \in\mathbb{R}^{N}.\]

Otherwise (is_holonomic=False),

\[\boldsymbol{W} \leftarrow\boldsymbol{W} - \eta\cdot\mathrm{offdiag}\left(\frac{1}{T}\sum_{t} \ \boldsymbol{\phi}(\boldsymbol{y}_{t})\boldsymbol{y}_{t}^{\mathsf{T}}\right) \ \boldsymbol{W}.\]