Multiplicative autocorrelation in stationary Markov processes

E. Castedo Ellerman (ORCID 0000-0002-5014-4809), castedo@castedo.com
10 April 2021

© 2022, Ellerman et al. This document is distributed under a Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/).

DOCUMENT TYPE: Open Study Answer

QUESTION: For any stationary Markov process, is the autocorrelation of an interval the product of the autocorrelations of its subintervals?

Summary

A stationary process $Z_t$ has multiplicative autocorrelation when
$$\operatorname{Cor}[Z_t, Z_r] = \operatorname{Cor}[Z_t, Z_s] \operatorname{Cor}[Z_s, Z_r]$$
for all $t \le s \le r$. Autocorrelation is defined as
$$\operatorname{Cor}[Z_t, Z_s] := \frac{\operatorname{Cov}[Z_t, Z_s]}{\sigma^2}$$
with $\sigma^2 = \operatorname{Var}(Z_t)$.
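The definition can be made concrete by testing the identity numerically for a stationary process whose autocorrelation depends only on the lag. The sketch below is illustrative and not from the original; `is_multiplicative` and both example autocorrelation functions are assumed names:

```python
# Check Cor[Z_t, Z_r] == Cor[Z_t, Z_s] * Cor[Z_s, Z_r] for all t <= s <= r,
# assuming stationarity so that correlation depends only on the lag.
def is_multiplicative(acf, max_lag, tol=1e-9):
    """acf(k) returns Cor[Z_t, Z_{t+k}] for lag k >= 0."""
    for s in range(max_lag + 1):
        for r in range(s, max_lag + 1):
            # By stationarity we may take t = 0; split the lag r at s.
            if abs(acf(r) - acf(s) * acf(r - s)) > tol:
                return False
    return True

# A geometric autocorrelation rho**k (as for an AR(1) process) is multiplicative:
print(is_multiplicative(lambda k: 0.7 ** k, 20))                   # True
# A non-geometric autocorrelation generally is not:
print(is_multiplicative(lambda k: 0.5 ** k + 0.2 * (k == 1), 20))  # False
```

This also illustrates why multiplicativity forces the geometric form: splitting the interval at every unit step gives $\operatorname{Cor}[Z_0, Z_k] = \rho^k$.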

A stationary autoregressive process has multiplicative autocorrelation [1]. However, not all stationary Markov processes have multiplicative autocorrelation. See the section below about a real-valued 3-state Markov chain for a counterexample.

Among discrete-time stationary processes, only autoregressive processes have multiplicative autocorrelation. Some Markov processes are autoregressive even though they are not obviously so. For example, every stationary real-valued two-state Markov chain is autoregressive (and thus also has multiplicative autocorrelation).

Multiplicative autocorrelation implies autoregression

Consider any real-valued discrete-time stationary Markov process $Z_t$ and, without loss of generality, translate it to $Z_t := Z_t - \operatorname{E}[Z_t]$.

Let
$$\sigma^2 := \operatorname{Var}(Z_t), \qquad \rho := \operatorname{Cov}[Z_t, Z_{t+1}] / \sigma^2$$

Multiplicative autocorrelation implies, for all $n \ge 0$,
$$\operatorname{Cor}[Z_t, Z_{t+n}] = \rho^n \qquad\text{and thus}\qquad \operatorname{Cov}[Z_t, Z_{t+n}] = \rho^n \sigma^2$$

Define what will be shown to be the "white noise" of $Z_t$ as an autoregressive process:
$$\epsilon_t := Z_t - \rho Z_{t-1}$$
By the convenient translation, $\operatorname{E}[Z_t] = 0$, and therefore
$$\operatorname{E}[\epsilon_t] = 0, \qquad \operatorname{Cov}[Z_t, Z_s] = \operatorname{E}[Z_t Z_s], \qquad \operatorname{E}[Z_t^2] = \sigma^2, \qquad \operatorname{E}[Z_t Z_{t+1}] = \rho \sigma^2$$

Consider any $n > 0$.
$$\begin{aligned}
\operatorname{E}[\epsilon_t \epsilon_{t+n}] &= \operatorname{E}[(Z_t - \rho Z_{t-1})(Z_{t+n} - \rho Z_{t+n-1})] \\
&= \operatorname{E}[Z_t Z_{t+n}] + \rho^2 \operatorname{E}[Z_{t-1} Z_{t+n-1}] - \rho \left( \operatorname{E}[Z_t Z_{t+n-1}] + \operatorname{E}[Z_{t-1} Z_{t+n}] \right) \\
&= (1 + \rho^2)\rho^n \sigma^2 - \rho \left( \rho^{n-1} \sigma^2 + \rho^{n+1} \sigma^2 \right) \\
&= 0
\end{aligned}$$
thus $\epsilon_t$ satisfies the "white noise" condition for expressing $Z_t$ as the autoregressive process
$$Z_{t+1} = \rho Z_t + \epsilon_{t+1}$$
QED
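The cancellation above can be double-checked exactly with rational arithmetic. The sketch below is illustrative and not part of the proof; `rho` and `sigma2` are arbitrary example values. It encodes $\operatorname{Cov}[Z_t, Z_{t+n}] = \rho^n \sigma^2$ and expands $\operatorname{E}[\epsilon_t \epsilon_{t+n}]$ term by term:

```python
from fractions import Fraction

def cov(n, rho, sigma2):
    # Multiplicative autocorrelation: Cov[Z_t, Z_{t+n}] = rho**|n| * sigma2
    return rho ** abs(n) * sigma2

def eps_cov(n, rho, sigma2):
    """E[eps_t eps_{t+n}] expanded term by term as in the derivation above."""
    return (cov(n, rho, sigma2)
            + rho ** 2 * cov(n, rho, sigma2)
            - rho * cov(n - 1, rho, sigma2)
            - rho * cov(n + 1, rho, sigma2))

rho, sigma2 = Fraction(2, 3), Fraction(5, 1)  # arbitrary example values
print(all(eps_cov(n, rho, sigma2) == 0 for n in range(1, 10)))  # True
```

Because the arithmetic is exact, the zeros confirm the algebraic cancellation rather than a numerical approximation of it.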

Real-valued 2-state Markov chain

For any stationary real-valued two-state Markov chain [1] $Z_t$ taking values $a_0$ and $a_1$,
$$\operatorname{Cor}[Z_t, Z_0] = \operatorname{Cor}[Z_t, Z_s] \operatorname{Cor}[Z_s, Z_0]$$
for all $0 \le s \le t$.

Proof. Let
$$q_1 := P(Z_t = a_1), \qquad q_0 := P(Z_t = a_0)$$
Map $Z_t$ to a more convenient
$$Y_t := \frac{Z_t - a_0}{a_1 - a_0}$$
Since $Y_t$ only equals 0 or 1,
$$\operatorname{E}[Y_t] = \operatorname{E}[Y_t^2] = q_1$$
and thus $\operatorname{Var}(Y_t) = q_1 - q_1^2 = q_1 q_0$. For convenience let
$$p_0 := P(Y_1 = 0 \mid Y_0 = 1), \qquad p_1 := P(Y_1 = 1 \mid Y_0 = 0), \qquad s := p_0 + p_1$$
Since $Y_t$ is stationary, it follows that $q_i = p_i / s$ for $i \in \{0, 1\}$.

In preparation for induction, assume
$$P(Y_t = 1 \mid Y_0 = 1) = q_1 + q_0 (1-s)^t$$
$$P(Y_t = 1 \mid Y_0 = 0) = q_1 - q_1 (1-s)^t$$
It must follow that
$$\begin{aligned}
P(Y_{t+1} = 1 \mid Y_0 = 1) &= P(Y_{t+1} = 1 \mid Y_1 = 1)(1 - p_0) + P(Y_{t+1} = 1 \mid Y_1 = 0)\, p_0 \\
&= [q_1 + q_0 (1-s)^t](1 - p_0) + [q_1 - q_1 (1-s)^t]\, p_0 \\
&= q_1 + [q_0 (1 - p_0) - q_1 p_0](1-s)^t \\
&= q_1 + [q_0 (1 - p_0) - (1 - q_0) p_0](1-s)^t \\
&= q_1 + [q_0 - p_0](1-s)^t \\
&= q_1 + [q_0 - q_0 s](1-s)^t \\
&= q_1 + q_0 (1-s)^{t+1}
\end{aligned}$$
and
$$\begin{aligned}
P(Y_{t+1} = 1 \mid Y_0 = 0) &= P(Y_{t+1} = 1 \mid Y_1 = 1)\, p_1 + P(Y_{t+1} = 1 \mid Y_1 = 0)(1 - p_1) \\
&= [q_1 + q_0 (1-s)^t]\, p_1 + [q_1 - q_1 (1-s)^t](1 - p_1) \\
&= q_1 + [q_0 p_1 - q_1 (1 - p_1)](1-s)^t \\
&= q_1 + [(1 - q_1) p_1 - q_1 (1 - p_1)](1-s)^t \\
&= q_1 + [p_1 - q_1](1-s)^t \\
&= q_1 + [q_1 s - q_1](1-s)^t \\
&= q_1 - q_1 (1-s)^{t+1}
\end{aligned}$$
which completes the induction, noting the base case of $t = 0$ is true.
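The closed form established by the induction can be verified exactly by iterating the one-step transition matrix. In this illustrative sketch, `p0` and `p1` are arbitrary example transition probabilities, not values from the original:

```python
from fractions import Fraction as F

p0, p1 = F(1, 3), F(1, 5)          # example values for P(1->0) and P(0->1)
s = p0 + p1
q1, q0 = p1 / s, p0 / s            # stationary probabilities of states 1 and 0
P = [[1 - p1, p1], [p0, 1 - p0]]   # P[i][j] = P(Y_{t+1} = j | Y_t = i)

def p_one(y0, t):
    """P(Y_t = 1 | Y_0 = y0), computed by iterating the transition matrix."""
    row = [F(1 - y0), F(y0)]       # point distribution on the starting state
    for _ in range(t):
        row = [sum(row[k] * P[k][j] for k in range(2)) for j in range(2)]
    return row[1]

print(all(p_one(1, t) == q1 + q0 * (1 - s) ** t and
          p_one(0, t) == q1 - q1 * (1 - s) ** t for t in range(8)))  # True
```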

Due to the convenient mapping to $Y_t$,
$$\operatorname{E}[Y_t Y_0] = P(Y_t = 1 \mid Y_0 = 1)\, P(Y_0 = 1) = \left( q_1 + q_0 (1-s)^t \right) q_1 = q_1^2 + q_0 q_1 (1-s)^t$$
thus
$$\operatorname{Cov}[Y_t, Y_0] = \operatorname{E}[Y_t Y_0] - \operatorname{E}[Y_t]\operatorname{E}[Y_0] = q_1^2 + q_0 q_1 (1-s)^t - q_1^2 = q_0 q_1 (1-s)^t$$
$$\operatorname{Cor}[Y_t, Y_0] = (1-s)^t$$
Since this autocorrelation is geometric in $t$, it is multiplicative, and correlation is unchanged by the affine map between $Y_t$ and $Z_t$. QED
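Putting the pieces together, the sketch below (again with assumed example transition probabilities, not values from the original) computes $\operatorname{Cor}[Y_t, Y_0]$ exactly from the $t$-step transition matrix and confirms it equals $(1-s)^t$:

```python
from fractions import Fraction as F

p0, p1 = F(1, 4), F(1, 6)          # example values for P(1->0) and P(0->1)
s = p0 + p1
q1, q0 = p1 / s, p0 / s            # stationary probabilities of states 1 and 0
P = [[1 - p1, p1], [p0, 1 - p0]]   # P[i][j] = P(Y_{t+1} = j | Y_t = i)

def cor(t):
    """Exact Cor[Y_t, Y_0] from the t-step transition probabilities."""
    Pt = [[F(1), F(0)], [F(0), F(1)]]          # identity = P ** 0
    for _ in range(t):
        Pt = [[sum(Pt[i][k] * P[k][j] for k in range(2)) for j in range(2)]
              for i in range(2)]
    cov = q1 * Pt[1][1] - q1 * q1              # E[Y_t Y_0] - E[Y_t] E[Y_0]
    return cov / (q1 * q0)                     # Var(Y_t) = q1 * q0

print(all(cor(t) == (1 - s) ** t for t in range(8)))  # True
```

Because the correlations are computed with exact rationals, the equality check is exact rather than approximate.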

Counterexample: real-valued 3-state Markov chain

Let $Z_t$ be a stationary Markov process such that
$$P(Z_t = -1) = P(Z_t = 0) = P(Z_t = 1) = 1/3$$
$$P(Z_{t+1} = 0 \mid Z_t = -1) = 1/2, \qquad P(Z_{t+1} = 1 \mid Z_t = 0) = 1/2, \qquad P(Z_{t+1} = -1 \mid Z_t = 1) = 1/2$$
and for all $i \in \{-1, 0, 1\}$,
$$P(Z_{t+1} = i \mid Z_t = i) = 1/2$$

Conveniently $\operatorname{E}[Z_t] = 0$, thus $\operatorname{Cov}[Z_t, Z_s] = \operatorname{E}[Z_t Z_s]$. For one time step we have
$$P(Z_1 = -1, Z_0 = -1) = (1/3)(1/2)$$
$$P(Z_1 = 1, Z_0 = 1) = (1/3)(1/2)$$
$$P(Z_1 = 1, Z_0 = -1) = 0$$
$$P(Z_1 = -1, Z_0 = 1) = (1/3)(1/2)$$
thus the autocorrelation of one time step must be positive:
$$\operatorname{E}[Z_1 Z_0] = (-1 \cdot -1)\tfrac{1}{6} + (1 \cdot 1)\tfrac{1}{6} + (1 \cdot -1)\tfrac{1}{6} = \tfrac{1}{6}$$
For two time steps,
$$P(Z_2 = -1, Z_0 = -1) = (1/3)(1/2)^2$$
$$P(Z_2 = -1, Z_0 = 1) = (1/3)\left[ (1/2)^2 + (1/2)^2 \right]$$
$$P(Z_2 = 1, Z_0 = 1) = (1/3)(1/2)^2$$
$$P(Z_2 = 1, Z_0 = -1) = (1/3)(1/2)^2$$
thus the autocorrelation for two time steps must be negative:
$$\operatorname{E}[Z_2 Z_0] = (-1 \cdot -1)\tfrac{1}{12} + (-1 \cdot 1)\tfrac{2}{12} + (1 \cdot 1)\tfrac{1}{12} + (1 \cdot -1)\tfrac{1}{12} = -\tfrac{1}{12}$$
thus
$$\operatorname{Cor}[Z_2, Z_0] \ne \operatorname{Cor}[Z_2, Z_1] \operatorname{Cor}[Z_1, Z_0]$$
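The counterexample's numbers can be reproduced exactly by raising the transition matrix to a power. This is an illustrative sketch of the chain defined above, using exact rational arithmetic:

```python
from fractions import Fraction as F

states = [-1, 0, 1]
h = F(1, 2)
# Transition matrix of the cyclic chain -1 -> 0 -> 1 -> -1,
# where each step stays put with probability 1/2.
P = [[h, h, 0],     # from -1: stay, or move to 0
     [0, h, h],     # from  0: stay, or move to 1
     [h, 0, h]]     # from  1: move to -1, or stay
pi = [F(1, 3)] * 3  # uniform distribution is stationary (P is doubly stochastic)

def cor(t):
    """Exact Cor[Z_t, Z_0] computed from the t-step transition matrix."""
    Pt = [[F(i == j) for j in range(3)] for i in range(3)]  # identity
    for _ in range(t):
        Pt = [[sum(Pt[i][k] * P[k][j] for k in range(3)) for j in range(3)]
              for i in range(3)]
    ezz = sum(pi[i] * Pt[i][j] * states[i] * states[j]
              for i in range(3) for j in range(3))
    var = sum(pi[i] * states[i] ** 2 for i in range(3))     # E[Z_t] = 0
    return ezz / var

print(cor(1), cor(2))             # 1/4 -1/8
print(cor(2) == cor(1) * cor(1))  # False: not multiplicative
```

These correlations match the covariances above: dividing $1/6$ and $-1/12$ by $\operatorname{Var}(Z_t) = 2/3$ gives $1/4$ and $-1/8$, and $-1/8 \ne (1/4)^2$.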

References

[1] Hamilton, James D. Time series analysis. Princeton University Press, Princeton, NJ, 1994. ISBN 978-0-691-04289-3.