
Advanced Techniques for Time Series Data Feature Engineering

source link: https://hackernoon.com/advanced-techniques-for-time-series-data-feature-engineering

Valentine Shkulov (@teenl0ve)

Introduction

Time series data is a key component in a wide range of domains, including finance and meteorology. Effective feature engineering is critical in preparing time series data for machine learning models.

In this article, we delve into advanced techniques for time series feature engineering, such as Fourier transform, wavelet transformation, derivatives, and autocorrelation.

These techniques assist in uncovering hidden structures and trends, capturing both time and frequency information, and measuring the linear relationship between data points at varying lags.

Techniques

Fourier Transform

Fourier transform is a mathematical technique that decomposes a time series signal into its frequency components. It's based on the principle that any signal can be broken down into a series of sinusoidal waves with varying amplitudes, frequencies, and phases.

By capturing the periodic patterns in the data, the Fourier transform can help identify hidden structures and trends that might be useful for prediction.

The Fourier Transform can be categorized into two types: the continuous Fourier Transform (CFT) for continuous signals and the discrete Fourier Transform (DFT) for discrete signals.

Fast Fourier Transform (FFT) is an efficient algorithm for computing the DFT of a sequence.
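To make the DFT concrete, here is a direct O(N²) evaluation of its definition, checked against NumPy's FFT. This is a didactic sketch, not how FFT libraries actually compute it:

```python
import numpy as np

def naive_dft(x):
    # Direct O(N^2) evaluation of the DFT definition:
    # X[k] = sum_n x[n] * exp(-2j * pi * k * n / N)
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(n, n) / len(x)) @ x

x = np.random.random(128)
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # the two agree
```

The FFT reduces this O(N²) sum to O(N log N) by recursively splitting the sequence into even and odd indices.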

The power spectral density, obtained from the squared magnitudes of the Fourier transform, can be used as a feature in machine learning models to improve their performance. Here is a simple FFT usage example:

import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, ifft

time_series = np.random.random(200)

# Perform the Fast Fourier Transform (FFT)
fft_values = fft(time_series)

# Get the magnitudes and the corresponding frequencies
fft_magnitude = np.abs(fft_values)
frequencies = np.fft.fftfreq(len(time_series))

# Plot the frequency spectrum
plt.plot(frequencies, fft_magnitude)
plt.xlabel('Frequency')
plt.ylabel('Magnitude')
plt.title('Frequency Spectrum')
plt.show()

# Zero out the low-magnitude frequency components
threshold = 7
fft_values_filtered = fft_values.copy()
fft_values_filtered[fft_magnitude < threshold] = 0

# Perform the inverse FFT to reconstruct the filtered signal
filtered_time_series = ifft(fft_values_filtered)

# Plot the original and filtered time series
plt.plot(time_series, label='Original')
plt.plot(filtered_time_series.real, label='Filtered')
plt.xlabel('Time')
plt.ylabel('Value')
plt.title('Original vs. Filtered Time Series')
plt.legend()
plt.show()
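The power spectral density mentioned earlier can itself be condensed into scalar model features. A minimal sketch, where the specific summary statistics chosen are illustrative:

```python
import numpy as np
from scipy.fft import fft

time_series = np.random.random(200)

# Power spectral density: squared FFT magnitudes,
# keeping only the non-negative frequencies of the real signal
psd = np.abs(fft(time_series)[: len(time_series) // 2]) ** 2

# Collapse the spectrum into a few scalar features for an ML model
p = psd / psd.sum()  # normalized spectrum, treated as a distribution
features = {
    "total_power": float(psd.sum()),
    "dominant_freq_idx": int(np.argmax(psd[1:]) + 1),  # skip the DC component
    "spectral_entropy": float(-np.sum(p * np.log(p + 1e-12))),
}
print(features)
```

Each of these scalars can be appended as a column to a feature matrix alongside time-domain features.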

Wavelet Transformation

Wavelet transformation is a mathematical technique used to decompose a signal into different frequency components. It is particularly useful in time series analysis because it allows us to capture both time and frequency information in the data.

Unlike the Fourier decomposition, which always uses complex exponential (sine and cosine) basis functions, a wavelet decomposition uses a time-localized oscillatory function known as the analyzing, or mother, wavelet.

The most common wavelet transformation is the Continuous Wavelet Transform (CWT).

The CWT of a signal x(t) at scale a and translation b is defined as:

CWT(a, b) = (1 / √|a|) ∫ x(t) · ψ*((t − b) / a) dt

where ψ is the mother wavelet, ψ* denotes its complex conjugate, a controls dilation, and b controls translation.

The wavelet function can be chosen based on the type of data and the desired features. A popular choice is the Morlet wavelet, the product of a complex exponential and a Gaussian envelope:

ψ(t) = π^(−1/4) · e^(iω₀t) · e^(−t²/2)

where ω₀ is the central frequency of the wavelet.

A SciPy example that computes the CWT using the Morlet wavelet:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

t = np.linspace(-1, 1, 200, endpoint=False)
sig = np.cos(2 * np.pi * 7 * t) + signal.gausspulse(t - 0.4, fc=2)
widths = np.arange(1, 31)

# Continuous wavelet transform with the Morlet wavelet
cwtmatr = signal.cwt(sig, signal.morlet2, widths)

real, imag = np.real(cwtmatr), np.imag(cwtmatr)

# Plot the real and imaginary parts of the CWT coefficients
fig, axes = plt.subplots(ncols=2, figsize=(20, 5))
axes[0].imshow(
    real,
    extent=[-1, 1, 1, 31],
    cmap="PRGn",
    aspect="auto",
    vmax=abs(real).max(),
    vmin=-abs(real).max(),
)
axes[1].imshow(
    imag,
    extent=[-1, 1, 1, 31],
    cmap="PRGn",
    aspect="auto",
    vmax=abs(imag).max(),
    vmin=-abs(imag).max(),
)
plt.show()

Derivatives

Derivatives can be used to describe the rate of change in time series data. For example, the first derivative represents the velocity, while the second derivative represents acceleration. By incorporating these features, we can better capture the dynamics of the time series data.

The first-order derivative represents the instantaneous rate of change of a variable with respect to time. It can help identify the direction and magnitude of the trend in the data, and it can serve as a marker for detecting changes in stock or market behavior regimes.

The second-order derivative represents the rate of change of the first-order derivative. It can help identify acceleration or deceleration in the trend and detect points of inflection.

Seasonal derivatives and cross-derivatives are also worth mentioning: they capture, respectively, the seasonal variations in the data and the pairwise influence of variables on each other.

To compute the derivatives numerically, we can use the finite difference method:

f′(t) ≈ (f(t + Δt) − f(t)) / Δt

f″(t) ≈ (f(t + Δt) − 2f(t) + f(t − Δt)) / Δt²

In pandas, the simplest way to compute the first derivative and use it as a feature is `diff()`:

import numpy as np
import pandas as pd

# Example data: replace with your own values and dates
dates = pd.date_range("2023-01-01", periods=100, freq="D")
data = np.cumsum(np.random.randn(100))
time_series = pd.Series(data, index=dates)

# Calculate the first-order derivative
time_series_diff = time_series.diff().dropna()

# Combine the original time series and the first-order derivative
features = pd.concat([time_series.shift(1), time_series_diff], axis=1).dropna()
features.columns = ['y_t', 'first_derivative']
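The same `diff`-based approach extends to second-order and seasonal derivatives. A minimal sketch on synthetic daily data, where the weekly period of 7 is an illustrative assumption:

```python
import numpy as np
import pandas as pd

# Synthetic daily series: a weekly cycle plus a linear trend
dates = pd.date_range("2023-01-01", periods=100, freq="D")
data = np.sin(2 * np.pi * np.arange(100) / 7) + np.arange(100) * 0.05
time_series = pd.Series(data, index=dates)

features = pd.DataFrame({
    "y_t": time_series,
    "first_derivative": time_series.diff(),              # velocity
    "second_derivative": time_series.diff().diff(),      # acceleration
    "seasonal_derivative": time_series.diff(periods=7),  # change vs. same weekday
}).dropna()
print(features.head())
```

On this series the seasonal derivative is constant: differencing at lag 7 cancels the weekly cycle and leaves only the trend, which is exactly why seasonal differences make good de-seasonalized features.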

The first-order derivative can also act as an indicator of regime changes: when the derivative changes sign, the regime has shifted.

Autocorrelation and Partial Autocorrelation

Autocorrelation, also known as serial correlation, measures the linear relationship between the time series data points at different lags.

Partial autocorrelation, on the other hand, measures the correlation between two data points at a given lag after removing the effect of the data points at all smaller lags.

Code example for autocorrelation and partial autocorrelation:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Example data: replace with your own time series
data = np.random.random(200)
lags = 10

# Autocorrelation and partial autocorrelation values
autocorr = acf(data, nlags=lags)
partial_autocorr = pacf(data, nlags=lags)

# Plot both functions
f, ax = plt.subplots(nrows=2, ncols=1, figsize=(8, 6))
plot_acf(data, lags=lags, ax=ax[0])
plot_pacf(data, lags=lags, ax=ax[1], method='ols')
plt.tight_layout()
plt.show()

Conclusion

Advanced feature engineering techniques for time series data can significantly improve the performance of machine learning models.

Fourier transform, wavelet transformation, derivatives, and autocorrelation each contribute unique insights into the underlying structure and trends of temporal data.

By leveraging these techniques, analysts can create more accurate and efficient forecasting models, ultimately leading to better decision-making in various domains.

