
MUSIC (Multiple Signal Classification) Algorithm

MUSIC (Multiple Signal Classification) is a high-resolution spectral estimation algorithm based on subspace decomposition, commonly used in DOA (Direction of Arrival) estimation, f-k spectrum analysis, radar signal processing, and related applications. It exploits the orthogonality between the signal subspace and the noise subspace to distinguish different signal wavenumbers \(k\) or direction angles \(\theta\), achieving super-resolution: it can resolve closely spaced wavenumbers or arrival angles.

Basic Idea of MUSIC Algorithm

The MUSIC algorithm is based on the eigenvalue decomposition of the received signal’s covariance matrix \(\boldsymbol R_y\), separating the signal subspace from the noise subspace. It then uses the orthogonality between the two subspaces to construct a pseudo-spectrum, and determines the signal wavenumbers or angles through a peak search.

Steps of MUSIC Algorithm

(1) Constructing the Covariance Matrix

Assume that the received signal \(\mathbf{y}(t)\) consists of \(M\) plane wave signals and noise:

\[\mathbf y(t)=\sum_{i=1}^M s_i(t) \mathbf a(k_i)+ \mathbf n(t) \tag{1}\]

where:

- \(N\) is the number of receiving antennas, so \(\mathbf y(t)\) and \(\mathbf n(t)\) are \(N \times 1\) vectors;
- \(s_i(t)\) is the complex amplitude of the \(i\)-th plane wave at time \(t\);
- \(\mathbf a(k_i)\) is the \(N \times 1\) array manifold (steering) vector for wavenumber \(k_i\); for a uniform linear array, \(\mathbf a(k) = [1, e^{jk}, \dots, e^{j(N-1)k}]^{\text T}\);
- \(\mathbf n(t)\) is the additive noise vector.

Defining:

\[\mathbf A = [\mathbf{a}(k_1), \mathbf{a}(k_2), \dots, \mathbf{a}(k_M)], \quad \mathbf S(t) = [s_1(t),s_2(t),\dots,s_M(t)]^\text T\tag{2}\]

the equation above can be rewritten as:

\[\mathbf y(t)= \mathbf A \boldsymbol{S}(t) + \mathbf n(t)\tag{3}\]

The spatial covariance matrix of the signal is computed as:

\[\boldsymbol R_y=\text{E}[\mathbf y \mathbf y^H]\tag{4}\]

Expanding it using the previous equation:

\[\boldsymbol R_y = \text{E}\left [\left (\mathbf A \boldsymbol{S}(t) + \mathbf n(t) \right )\left (\mathbf A \boldsymbol{S}(t) + \mathbf n(t)\right )^H\right ]\tag{5}\]

If the signals are stationary, and the noise is zero-mean Gaussian white noise, and signals are uncorrelated with noise, then:

\[\boldsymbol R_y=\boldsymbol A \boldsymbol R_s \boldsymbol A^H+\sigma^2 \boldsymbol I \tag{6}\]

where:

- \(\boldsymbol R_s = \text{E}[\boldsymbol S(t)\boldsymbol S^H(t)]\) is the \(M \times M\) source covariance matrix;
- \(\sigma^2\) is the noise power and \(\boldsymbol I\) is the \(N \times N\) identity matrix.
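In practice the expectation in (4) is replaced by a sample average over \(T\) snapshots. A minimal sketch of this step (the function name and the data matrix Y are illustrative, not from the original):

import numpy as np

# Minimal sketch: sample estimate of R_y from T array snapshots.
# Y is a hypothetical N x T data matrix (one column per snapshot).
def sample_covariance(Y):
    N, T = Y.shape
    return Y @ Y.conj().T / T  # N x N Hermitian estimate of E[y y^H]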

(2) Eigenvalue Decomposition (EVD)

Perform eigenvalue decomposition on \(\boldsymbol R_y\):

\[\boldsymbol R_y = \boldsymbol U \boldsymbol\Lambda \boldsymbol U^H \tag{7}\]

where:

- \(\boldsymbol U\) is the \(N \times N\) matrix whose columns are the eigenvectors of \(\boldsymbol R_y\);
- \(\boldsymbol\Lambda\) is the diagonal matrix of the corresponding eigenvalues.

In this decomposition:

- the \(M\) largest eigenvalues correspond to the signal components; their eigenvectors form the signal subspace \(\boldsymbol U_s\);
- the remaining \(N - M\) eigenvalues are (ideally) all equal to \(\sigma^2\); their eigenvectors form the noise subspace \(\boldsymbol U_n\).

Thus,

\[\boldsymbol R_y = \boldsymbol U_s \boldsymbol\Lambda_s \boldsymbol U_s^H + \boldsymbol U_n \boldsymbol\Lambda_n \boldsymbol U_n^H \tag{8}\]
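A minimal sketch of this subspace split, assuming the number of sources \(M\) is known (the function name is illustrative):

import numpy as np

# Split R_y into signal and noise subspaces. np.linalg.eigh returns
# eigenvalues in ascending order, so the last M columns of U belong
# to the M largest eigenvalues.
def subspaces(R_y, M):
    eigvals, U = np.linalg.eigh(R_y)
    U_s = U[:, -M:]   # signal subspace (M largest eigenvalues)
    U_n = U[:, :-M]   # noise subspace (N - M smallest eigenvalues)
    return U_s, U_n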

(3) Compute the MUSIC Pseudo-Spectrum

Since the noise subspace \(\boldsymbol U_n\) is orthogonal to the signal’s array manifold \(\mathbf{a}(k)\) whenever \(k\) is a true signal wavenumber, i.e.,

\[\boldsymbol U_n^H \mathbf{a}(k) \approx 0\tag{9}\]

the MUSIC pseudo-spectrum can be constructed as:

\[P_{\text{MUSIC}}(k) = \frac{1}{\mathbf{a}^H(k) \boldsymbol U_n \boldsymbol U_n^H \mathbf{a}(k)}\tag{10}\]

where:

- \(\boldsymbol U_n \boldsymbol U_n^H\) is the projector onto the noise subspace;
- the denominator is nearly zero exactly when \(\mathbf a(k)\) lies in the signal subspace.

(4) Peak Search for Estimating the Wavenumbers

The MUSIC pseudo-spectrum exhibits peaks at the true wavenumbers \(k_i\), so the estimates are obtained by locating the \(M\) largest peaks of \(P_{\text{MUSIC}}(k)\) over the scan range.

Reciprocal Structure of the Pseudo-Spectrum

The denominator of the pseudo-spectrum in (10) is the quadratic form

\[\mathbf{a}^H(k) \boldsymbol U_n \boldsymbol U_n^H \mathbf{a}(k) \tag{11}\]

When \(k\) coincides with a true wavenumber \(k_i\), the manifold vector \(\mathbf a(k_i)\) lies in the signal subspace and is (nearly) orthogonal to \(\boldsymbol U_n\), so this quadratic form approaches zero and its reciprocal spikes; for all other \(k\) it stays bounded away from zero.

✅ Therefore, the MUSIC pseudo-spectrum exhibits peaks at the true wavenumbers!
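The peak search itself can be done on a sampled grid. One possible sketch using scipy.signal.find_peaks (the arrays k_scan and P_music are assumed to come from evaluating (10) on a grid, as in the full script below):

import numpy as np
from scipy.signal import find_peaks

# Pick the M largest local maxima of a sampled pseudo-spectrum.
def estimate_wavenumbers(k_scan, P_music, M):
    peaks, _ = find_peaks(P_music)                # all local maxima
    top = peaks[np.argsort(P_music[peaks])[-M:]]  # keep the M largest
    return np.sort(k_scan[top])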

Meaning of Eigenvalue Decomposition of the Covariance Matrix

The MUSIC algorithm aims to find the \(M\) manifold vectors \(\mathbf{a}(k_i)\) from equation (1), which corresponds to finding the matrix \(\mathbf{A}\) in equation (2).

Next, we analyze equations (6) and (8).

1) Structure of the Matrix \(A R_s A^H\)

Revisiting the signal covariance matrix:

\[\boldsymbol R_y = \boldsymbol A \boldsymbol R_s \boldsymbol A^H + \sigma_n^2 \boldsymbol I\]

where:

\[\boldsymbol A = [\mathbf{a}(k_1), \mathbf{a}(k_2), \dots, \mathbf{a}(k_M)]\]

\[\boldsymbol R_s = \text{diag}(\lambda_1, \lambda_2, \dots, \lambda_M)\]

where \(\lambda_i\) represents the power of the signal sources.

We focus on:

\[\boldsymbol A \boldsymbol R_s \boldsymbol A^H\]

which is an \(N \times N\) matrix whose rank is at most \(M\).

2) Why is the Rank of \(\boldsymbol A \boldsymbol R_s \boldsymbol A^H\) at Most M?

(a) Basic Definition of Rank

The rank of a matrix is the number of linearly independent column vectors, i.e., the dimension of the column space.

(b) Examining the Rank of A

\(\boldsymbol A\) is an \(N \times M\) matrix whose columns are the manifold vectors \(\mathbf a(k_1), \dots, \mathbf a(k_M)\). For distinct wavenumbers these columns are linearly independent (see the Vandermonde argument below), so \(\text{rank}(\boldsymbol A) = M\).

(c) After Multiplying by \(R_s\)

\[\boldsymbol A \boldsymbol R_s\]

only rescales the columns of \(\boldsymbol A\) by the source powers \(\lambda_i\), and

\[\boldsymbol A \boldsymbol R_s \boldsymbol A^H\]

still only contains M independent directions, as each of its columns is a linear combination of the columns of A.

(d) Conclusion

\[\text{rank}(\boldsymbol A \boldsymbol R_s \boldsymbol A^H) \le M,\]

with equality when all source powers \(\lambda_i\) are nonzero.
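A quick numerical check of this conclusion (antenna count, wavenumbers, and powers are illustrative):

import numpy as np

# N = 8 antennas, M = 3 sources with distinct normalized wavenumbers.
N, M = 8, 3
k = np.array([0.1, 0.25, 0.4])
A = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), k))  # N x M manifold matrix
R_s = np.diag([1.0, 0.5, 2.0])                          # source powers
print(np.linalg.matrix_rank(A @ R_s @ A.conj().T))      # prints 3, i.e. M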

3) Why is the Column Space of \(A R_s A^H\) Spanned by A?

Since:

\[A R_s A^H = A (\text{diag}(\lambda_1, \lambda_2, ..., \lambda_M)) A^H\]

Step-by-step analysis:

- \(\boldsymbol R_s = \text{diag}(\lambda_1, \dots, \lambda_M)\) merely scales the \(i\)-th column of \(\boldsymbol A\) by \(\lambda_i\), so \(\boldsymbol A \boldsymbol R_s\) has the same column space as \(\boldsymbol A\) (assuming all \(\lambda_i > 0\));
- right-multiplying by \(\boldsymbol A^H\) forms linear combinations of the columns of \(\boldsymbol A \boldsymbol R_s\), which cannot leave that column space.

Conclusion:

Column space(\(\boldsymbol A\boldsymbol R_s\boldsymbol A^H\)) = Column space(\(\boldsymbol A\))

4) Intuitive Understanding

You can think of \(A R_s A^H\) as a data projection process:

- \(\boldsymbol A^H\) maps an \(N\)-dimensional vector onto the \(M\) signal directions;
- \(\boldsymbol R_s\) weights each direction by its source power;
- \(\boldsymbol A\) maps the result back into the \(N\)-dimensional observation space.

Thus, \(A R_s A^H\) remains in the signal subspace spanned by A, and its rank is at most M.

5) Connection to the MUSIC Algorithm

(a) Signal Subspace

Perform eigenvalue decomposition on \(R_y = A R_s A^H + \sigma_n^2 I\):

\[R_y = U \Lambda U^H\]

The \(M\) largest eigenvalues are strictly greater than \(\sigma_n^2\), and their eigenvectors \(U_s\) span the column space of \(A\), i.e., the signal subspace.

(b) Noise Subspace

The remaining \(N - M\) eigenvalues all equal \(\sigma_n^2\), and their eigenvectors \(U_n\) span the orthogonal complement of the column space of \(A\), so that

\[U_n^H A \approx 0\]

This explains why the MUSIC pseudo-spectrum is constructed as:

\[P_{\text{MUSIC}}(k) = \frac{1}{\mathbf{a}^H(k) U_n U_n^H \mathbf{a}(k)}\]
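This orthogonality is easy to verify numerically for the ideal covariance (6); the sketch below uses illustrative values with \(R_s = I\) and \(\sigma_n^2 = 0.01\):

import numpy as np

# Ideal covariance R_y = A A^H + sigma^2 I, then check U_n^H A ~ 0.
N, M = 8, 3
k = np.array([0.1, 0.25, 0.4])
A = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), k))
R_y = A @ A.conj().T + 0.01 * np.eye(N)
_, U = np.linalg.eigh(R_y)
U_n = U[:, :-M]                          # noise subspace
print(np.linalg.norm(U_n.conj().T @ A))  # ~ 1e-15, numerically zero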

Explanation: Orthogonality Between Noise Subspace and Signal Subspace

1) Why Are the Noise Subspace and Signal Subspace Orthogonal?

Starting from the eigenvalue decomposition (EVD) of the covariance matrix \(R_y\):

\[R_y = U_s \Lambda_s U_s^H + U_n \Lambda_n U_n^H\]

where:

- \(U_s\) collects the eigenvectors associated with the \(M\) largest eigenvalues (the signal subspace);
- \(U_n\) collects the eigenvectors associated with the remaining \(N - M\) eigenvalues (the noise subspace).

Because \(R_y\) is Hermitian, its eigenvectors can be chosen orthonormal, so eigenvectors belonging to different subspaces are mutually orthogonal.

This implies:

\[U_n^H U_s = 0\]

which means the noise subspace and the signal subspace are completely orthogonal.

2) Why Are the Eigenvectors of the Noise Subspace Orthogonal to the Signal Subspace?

Since eigenvectors span a subspace, if two subspaces are orthogonal, then any vector belonging to one subspace will be orthogonal to any vector in the other subspace.

In other words:

- every manifold vector \(\mathbf a(k_i)\) of a true signal lies in the signal subspace;
- every column of \(U_n\) is orthogonal to that subspace.

Thus, we can rigorously express this as:

\[U_n^H \mathbf{a}(k) \approx 0, \quad \forall k \in \text{signal directions}\]

This implies that the denominator of the MUSIC pseudo-spectrum vanishes at the true signal directions, which is exactly what the peak search exploits.

3) Summary

The noise subspace and the signal subspace are orthogonal complements of each other; the true manifold vectors lie in the signal subspace, so projecting them onto the noise subspace yields (nearly) zero.

Uniqueness of Beams in the Signal Subspace Formed by Manifold Vectors

From the previous summary, we can see that the essence of the MUSIC algorithm is to identify the noise subspace (by determining its basis) and then use the orthogonality between the signal subspace and the noise subspace to find the true manifold vectors within the signal subspace, which correspond to the actual incident beam directions.

A key question arises: within the signal subspace, which contains infinitely many vectors orthogonal to the noise subspace, how can we determine the unique true signal vectors (manifold vectors)?

We need to prove that among all manifold vectors, there exist exactly M unique vectors orthogonal to the noise subspace.

Given \(N\) receiving antennas, each manifold vector is an \(N\times 1\) column vector. Since there are \(M\) manifold vectors (corresponding to actual incident beam directions), the noise subspace has dimension \(N - M\).

Since the signal subspace is M-dimensional, we use proof by contradiction to show that there exist exactly M manifold vectors orthogonal to the noise subspace.

The matrix formed by manifold vectors with distinct wavenumbers is a Vandermonde matrix, so any collection of at most \(N\) such columns is linearly independent. Since \(M < N\), any \(M\) chosen columns are linearly independent. The signal subspace has dimension \(M\), meaning that at most \(M\) linearly independent manifold vectors can be found within it.

If there were \(M+1\) manifold vectors orthogonal to the noise subspace, they would all belong to the signal subspace. Since the signal subspace is \(M\)-dimensional, these \(M+1\) vectors would have to be linearly dependent. Yet, by the Vandermonde property, any \(M+1\) column vectors (as long as \(M+1 \leq N\)) remain linearly independent, a contradiction. This confirms that there exist exactly \(M\) unique manifold vectors orthogonal to the noise subspace.
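The Vandermonde property invoked here can also be checked numerically; with \(M+1 = 4 \leq N = 8\) distinct wavenumbers (values illustrative), the manifold matrix has full column rank:

import numpy as np

# Any M+1 <= N manifold vectors with distinct wavenumbers are
# linearly independent: the N x (M+1) matrix has rank M+1.
N = 8
k = np.array([0.1, 0.2, 0.3, 0.4])  # M + 1 = 4 distinct values
V = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), k))
print(np.linalg.matrix_rank(V))     # prints 4, full column rank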

Vandermonde Matrix

For an \(n \times n\) Vandermonde matrix:

\[V = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 & \cdots & \lambda_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \lambda_3^{n-1} & \cdots & \lambda_n^{n-1} \end{bmatrix}\]

Its determinant is given by:

\[\det(V) = \prod_{1 \leq i < j \leq n} (\lambda_j - \lambda_i)\]
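A small numerical sanity check of this formula (the \(\lambda_i\) values are illustrative):

import numpy as np
from itertools import combinations

lam = np.array([1.0, 2.0, 3.5, -0.5])
V = np.vander(lam, increasing=True).T   # V[i, j] = lam_j ** i
pairs = combinations(range(len(lam)), 2)
prod = np.prod([lam[j] - lam[i] for i, j in pairs])
print(np.linalg.det(V), prod)           # the two values agree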

The matrix formed by the manifold vectors is also a Vandermonde matrix, with \(\lambda_i = e^{j\theta_i}\); its determinant is nonzero whenever the \(\theta_i\) are distinct (mod \(2\pi\)):

\[V = \begin{bmatrix} e^{j\theta_1\times0} & e^{j\theta_2\times0} & e^{j\theta_3\times0} & \cdots & e^{j\theta_n\times0} \\ e^{j\theta_1\times1} & e^{j\theta_2\times1} & e^{j\theta_3\times1} & \cdots & e^{j\theta_n\times1} \\ e^{j\theta_1\times2} & e^{j\theta_2\times2} & e^{j\theta_3\times2} & \cdots & e^{j\theta_n\times2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ e^{j\theta_1\times(n-1)} & e^{j\theta_2\times(n-1)} & e^{j\theta_3\times(n-1)} & \cdots & e^{j\theta_n\times(n-1)} \\ \end{bmatrix}\]

This can be rewritten in a form similar to the basis of a DFT transform:

\[V = \begin{bmatrix} e^{j\frac{k_1}{n\times Q}\times0} & e^{j\frac{k_2}{n\times Q}\times0} & e^{j\frac{k_3}{n\times Q}\times0} & \cdots & e^{j\frac{k_n}{n\times Q}\times0} \\ e^{j\frac{k_1}{n\times Q}\times1} & e^{j\frac{k_2}{n\times Q}\times1} & e^{j\frac{k_3}{n\times Q}\times1} & \cdots & e^{j\frac{k_n}{n\times Q}\times1} \\ e^{j\frac{k_1}{n\times Q}\times2} & e^{j\frac{k_2}{n\times Q}\times2} & e^{j\frac{k_3}{n\times Q}\times2} & \cdots & e^{j\frac{k_n}{n\times Q}\times2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ e^{j\frac{k_1}{n\times Q}\times(n-1)} & e^{j\frac{k_2}{n\times Q}\times(n-1)} & e^{j\frac{k_3}{n\times Q}\times(n-1)} & \cdots & e^{j\frac{k_n}{n\times Q}\times(n-1)} \\ \end{bmatrix}\]

where \(k_i<n\times Q\), and \(Q\) is any positive integer representing the oversampling factor.

The implementation code can be found on GitHub: https://github.com/taichiorange/leba_math, under the directory:

leba_math/MIMO/MIMO-beam-detection/root-MUSIC-algorithm.py

import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import eigh

# configuration
N = 8  # number of antennas
M = 5  # number of sources (beams)
k_true = np.array([0.3, 0.4, 0.5, 0.6, 0.7])  # true normalized wavenumbers
SNR = 10  # signal-to-noise ratio (dB)

# create a manifold vector
def steering_vector(k, N):
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * k * n)

NSamples = 100

# generate signals
X = np.zeros((N, NSamples), dtype=complex)
for i in range(M):
    a_k = steering_vector(k_true[i], N)
    s = np.exp(1j * 2 * np.pi * np.random.rand(NSamples))  # unit-amplitude signals with random phases
    X += np.outer(a_k, s)

# add noise
noise = (np.random.randn(N, NSamples) + 1j * np.random.randn(N, NSamples)) / np.sqrt(2)
X += noise * (10 ** (-SNR / 20))

# covariance matrix
R_y = X @ X.conj().T / X.shape[1]

# Eigenvalue Decomposition
eigvals, eigvecs = eigh(R_y)
U_n = eigvecs[:, :-M]  # noise subspace (eigh returns eigenvalues in ascending order)

# MUSIC pseudo-spectrum
k_scan = np.linspace(0, 1, 500)
P_music = np.zeros_like(k_scan, dtype=float)

for i, k in enumerate(k_scan):
    a_k = steering_vector(k, N)
    P_music[i] = 1 / np.abs(a_k.conj().T @ U_n @ U_n.conj().T @ a_k)

# normalize
P_music = 10 * np.log10(P_music / np.max(P_music))

# plot the MUSIC pseudo-spectrum
plt.plot(k_scan, P_music)
plt.xlabel("wave beam index (normalized)")
plt.ylabel("MUSIC Spectrum (dB)")
plt.title("MUSIC High-Resolution f-k Spectrum")
plt.grid()
plt.xticks(np.arange(0, 1, 0.1))
plt.show()

Figure: MUSIC pseudo-spectrum generated by the script above, with peaks at the true wavenumbers.
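With the parameters above, the plot should show five sharp peaks near the true values \(k = 0.3, 0.4, 0.5, 0.6, 0.7\); the estimates themselves can then be extracted with a peak picker such as the find_peaks sketch shown earlier.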