Changing networks: Moderated idiographic psychological networks

Laura F. Bringmann1*, Sigert Ariens2, Anja F. Ernst1, Evelien Snippe3, & Eva Ceulemans2

Received: December 27, 2023. Accepted: April 15, 2024. Published: May 4, 2024. https://doi.org/10.56296/aip00014

Abstract
Idiographic psychological networks based on intensive longitudinal data are increasingly employed in clinical practice. However, these models mainly focus on the associations among psychological variables and changes in these associations, whereas the underlying factors for those changes are not taken into account. The factors contributing to change can be studied with moderation analyses, but although such analyses are standard in clinical research, they are hardly applied in the domain of idiographic networks. Therefore, we implement the fixed moderated time series model to study how networks change depending on context factors. Fixed moderated time series analysis is a vector autoregressive based model, in which all parameters of the model can be moderated, including the innovation structure. As the model is based on the state space framework, it can also directly estimate changes in the mean levels of the variables in the network. With two empirical examples, we demonstrate how the fixed moderated time series model can reveal changing network structures. We show that this idiographic moderation approach not only provides a new way to look at what parameters in a network change over time, but also offers tools to see which factors are associated with the change.

Keywords: personalized models; moderation analysis; networks; time series; vector autoregressive model

  1. University of Groningen
  2. KU Leuven
  3. University Medical Center Groningen, University of Groningen

*Please address correspondence to l.f.bringmann@rug.nl, University of Groningen, Faculty of Behavioural and Social Sciences, Department of Psychometrics and Statistics, Grote Kruisstraat 2/1, 9712 TS, Groningen, the Netherlands

Bringmann, L. F., Ariens, S., Ernst, A. F., Snippe, E., & Ceulemans, E. (2024). Changing networks: Moderated idiographic psychological networks. advances.in/psychology, 2, e658296. https://doi.org/10.56296/aip00014

The current article passed one round of double-blind peer review. The anonymous review report can be found here.

 

Introduction

Intensive longitudinal data (ILD) are increasingly used to study mental health, offering insights into emotional experiences and their fluctuations over time in the context of daily life (Mehl & Conner, 2012; Myin-Germeys & Kuppens, 2022; Trull & Ebner-Priemer, 2013). Such data are collected via techniques such as Ambulatory Assessment (AA), Experience Sampling Method (ESM), and Ecological Momentary Assessment (EMA; Delespaul & Devries, 1987; Kuppens et al., 2022; Palmier-Claus et al., 2011; Trull & Ebner-Priemer, 2020). Considering these opportunities, ILD are not only employed in research settings but are also making their way into clinical practice, including feedback based on ILD (Bartels et al., 2023). Most of this feedback is descriptive. For instance, it reveals how emotions develop over time, presented in a time series plot, and compares a patient’s experienced happiness across different contexts, depicted through a bar graph (e.g., Bastiaansen et al., 2018; Kramer et al., 2014; Van der Krieke et al., 2016).

In addition to descriptive feedback, idiographic (i.e., person-specific) psychological networks are increasingly employed in clinical practice to illustrate associations among variables (e.g., symptoms, emotions, cognition, and behavior) over time (Bringmann et al., 2022; Klipstein et al., 2020; Lutz et al., 2018; Piccirillo & Rodebaugh, 2019). The expectation is that a personalized network can uncover risk factors, thus helping prevent mental disorders (Wright & Zimmermann, 2019). For example, identifying whether specific symptoms, such as sleep problems, can lead to other symptoms, such as feelings of sadness, helps reveal a potential vicious cycle between sleep and mood. Finding such interactions could guide interventions; if such dynamics are observed, improving sleep would be a good starting point to enhance mood (Borsboom, 2017a; Borsboom & Cramer, 2013; Cramer & Borsboom, 2015). This concept of not only focusing on the mean level of symptoms but also on how they influence each other (i.e., their temporal dynamics) aligns well with clinical practice (Robinaugh et al., 2020) and has therefore led to a surge in the use of idiographic networks (Bak et al., 2016; Bos et al., 2018; David et al., 2018; Epskamp et al., 2018a; Levinson et al., 2021; Reeves & Fisher, 2020; Tuin et al., 2022; Wichers et al., 2016).

Although therapists and patients express willingness to use idiographic networks, opinions in clinical practice and research are mixed (Bastiaansen et al., 2020; Blanchard & Heeren, 2022; Frumkin et al., 2020; Klipstein et al., 2020; Wichers et al., 2017). Some therapists, for example, have stated that these idiographic networks are challenging to interpret (Weermeijer et al., 2023), and concerns have been raised regarding the reliability of results due to a low number of time points relative to the number of variables in the network structure (Bulteel et al., 2018; Mansueto et al., 2022; Wright & Zimmermann, 2019).

More fundamentally, the gap from psychological network theory to the application of the right network model has been highlighted (Bringmann, 2021; Bringmann & Eronen, 2018; Burger et al., 2022a; Fried, 2020; Wright & Woods, 2020). The current underlying model of personalized networks is a form of the vector autoregressive (VAR) model. This model captures the temporal dynamics between the elements of the network through lagged relationships: the predicted relationships between a variable at one time point and variables at the next time point (Bulteel et al., 2016; Epskamp et al., 2018b). While the network illustrates how variables, such as symptoms or emotions, are associated over time, it does not depict change in the network structure. In other words, the connections between the variables in the network and their mean level are assumed to be time-invariant (i.e., stationary; Bringmann et al., 2018; Lütkepohl, 2007).

However, change is often relevant for patients and therapists using such networks. Identifying which variable in the network leads to changes in the network, so that the network structure (the temporal dynamics) can develop into a healthier state, is frequently stated as a central goal in network theories (Borsboom, 2017a). For instance, improvements in sleep may lead to a change in the network that results in more positive mood (Borsboom, 2017b).[1] In general, as patients are studied for longer time periods (over the past decade, studies have extended to around 400 time points over periods exceeding four months in various patient groups), not only is change in mood, behavior, and cognition expected, but modeling this change is of primary interest for these patients (e.g., Bak et al., 2016; Bos et al., 2020; Bringmann et al., 2021; Burger et al., 2022b; Dejonckheere et al., 2021; Helmich et al., 2023; Smit et al., 2023; Wichers et al., 2016, 2020).

Recent developments in psychological networks, and in the underlying time series models more generally, include time-varying versions of VAR models, in which the mean and temporal dynamics are allowed to change over time (Albers & Bringmann, 2020; Bringmann et al., 2018; Cabrieto et al., 2019; Chen et al., 2021; Chow, 2019; Haslbeck et al., 2021a; Molenaar et al., 2016). However, researchers developing and using such models have primarily focused on when and where the network structure changes, rather than on the factors underlying this change (Haslbeck et al., 2022). In other words, with time-varying network models, researchers have only aimed to determine whether certain connections change over time and when they do so (e.g., Bak et al., 2016; Wal et al., 2023; Wichers et al., 2020).

Nevertheless, interests in clinical research include not just when and where a change occurs, but also why the change occurs. Consequently, it is more in line with clinical thinking to investigate not just whether a patient’s mood and the connection between, for instance, feeling down and cheerfulness change in the network, but also whether certain (contextual) factors, such as being with others or alone, influence mood or connections in the network (Bringmann, in press; Piot et al., 2022). This in turn can not only provide insights into when a clinical intervention is needed, but also give guidance on which factors can help in improving one’s mood (Myin-Germeys et al., 2009; Wichers et al., 2011), for example, by identifying individuals who can lift the patient’s mood through offering social support (Fischer & Van Kleef, 2010; Stadel et al., 2023).

The most common way of studying whether certain factors contribute to or are associated with change is a form of interaction analysis, that is, identifying relevant moderators. Although moderation analyses are standard practice in clinical research (e.g., Bolger & Laurenceau, 2013; Chen et al., 2016) and have been developed for cross-sectional networks (Haslbeck, 2022; Haslbeck et al., 2021b), they are hardly applied in clinical feedback or idiographic networks in general.

Additionally, models currently available for fitting time-varying networks are not suitable to test the moderation of all parameters in a VAR model (Bringmann et al., 2018; Haslbeck et al., 2020, 2022). VAR models based on linear or additive regression allow for moderating the lagged connections within the network (i.e., the temporal dynamics), but not the innovation structure, in other words the error part of the network model (Bringmann et al., 2018; Swanson, 2023). However, it has been argued that all parts of a network should be studied and can be relevant for clinical research and practice (Epskamp et al., 2018a). For example, the innovation variance not only reflects the extent to which the process of interest (e.g., someone’s cheerfulness) varies due to unobserved factors (e.g., the weather), but can also reflect reactivity to these factors (Jongerling et al., 2015; Koval et al., 2021). It may be, for example, that on stressful days unobserved factors, such as the weather or bad news, result in more variability in mood than similar unobserved factors on non-stressful days. Thus, relating changes in the innovation network structure to moderators (e.g., daily stress) is arguably informative for the patient and the therapist.

Furthermore, most idiographic networks do not take into account changes in the mean. This is partly due to the fact that mean changes can only be estimated indirectly, via the intercepts and temporal dynamics. Thus, a model that can immediately depict changes in the mean levels of variables could improve the clinical utility of idiographic networks. Therefore, we now turn to the fixed moderated time series model. This is a VAR-based model in which all parameters of the model can be moderated, including the innovation structure (Adolf et al., 2017). As the model is based on the state space framework, it can also directly estimate changes in the mean levels of the variables in the network.

In the following sections, we will first provide the necessary methodological background by introducing VAR-based networks in greater detail. Subsequently, we will explain the fixed moderated time series model, the model underlying this moderated idiographic network approach. We will then demonstrate how the fixed moderated time series model can be fitted to empirical data and reveal changing network structures with two examples. Specifically, we will test whether social interaction (binary moderator) and sleep quality (continuous moderator) have an impact on the emotion network of patients with depression. Furthermore, we will illustrate where in the network these changes due to the moderator manifest, such as in the connections between variables or solely in the mean levels.

The empirical data used to showcase the moderated idiographic network models come from two patients receiving treatment for depression in the TRANS-ID Recovery study, with over 400 time points collected for each individual (Helmich et al., 2023). While the data cannot be shared due to its sensitive nature (many observations within patients that cannot be fully anonymized), they are available on reasonable request. All code can be found on the Open Science Framework (OSF): https://osf.io/3aetv/?view_only=None.

Methods Background

VAR-based networks

The vector autoregressive (VAR) model is a variation of multiple regression where the independent variables are lagged forms of the dependent variables (Chatfield, 2003; Hamaker & Dolan, 2009; Lütkepohl, 2007; Stadnitski & Wild, 2019). There are as many equations in the model as dependent variables, and the dependent variable within each equation is a function of its own lagged value and the lagged values of all other dependent variables. The term ‘lagged’ refers to the fact that the values of the independent variables are the values of the dependent variables at previous time points. Lag-1 (i.e., one time point back) is the most common form of a VAR model (i.e., VAR (1); Bringmann, 2021). As the VAR(1) model is formulated in discrete time, it assumes that the time intervals between time points are equal across observations, for example, the time between successive time points is always three hours (Haan-Rietdijk et al., 2017).

A VAR(1) model can be written in matrix form as follows (Brandt & Williams, 2007):

\( \begin{equation}
\boldsymbol{y_t}= \boldsymbol{\alpha} + \boldsymbol{\Phi} \boldsymbol{y_{t-1}}+ \boldsymbol{\epsilon_t}.
\label{eq:var1}
\end{equation}
\tag{1}
\)

with m variables measured at t = 1, 2, 3, …, T different time points (i.e., occasions), where the values of the m variables at time point t are collected in an (m × 1) vector yt = (y1,t, y2,t, …, ym,t)´, containing the dependent variables at time t. The values in yt at time t depend on the lagged values, yt−1, through autoregressive and cross-lagged effects, which are captured by the (m × m) matrix Φ. The intercept terms are contained in α, a column vector of dimensions (m × 1). The column vector εt, also of dimensions (m × 1), represents the innovation terms or random shocks.

In its simplest form, the VAR model consists of two variables (hence, yt = (y1,t, y2,t)´), such as cheerful (Che) and down (Dow), representing mood:

\( \begin{equation}
\begin{aligned}
Che_{t}&= \alpha_{1}+\phi_{11} Che_{t-1}+\phi_{12} Dow_{t-1}+\epsilon_{1,t}\\
Dow_{t}&= \alpha_{2}+\phi_{21} Che_{t-1}+\phi_{22} Dow_{t-1}+\epsilon_{2,t}.
\end{aligned}
\tag{2}
\label{eq:var2}
\end{equation}
\)

In a VAR model, the focus lies on capturing the temporal dynamics and dependence among variables. The diagonal elements of the Φ matrix in Equation (1) contain the autoregressive effects, denoted as φ11 and φ22 in Equation (2). These autoregressive effects provide insight into the carryover effect of mood on itself over time, controlling for the cross-lagged effects (as they are analogous to partial coefficients in standard multiple regression). For instance, a positive autoregressive effect for the variable cheerfulness suggests that the cheerfulness value at the previous time point, t−1, predicts a similar cheerfulness value at the current time point t (controlling for the cross-lagged effects). A positive autoregressive effect is commonly known as inertia (Kuppens et al., 2010; Suls et al., 1998). In contrast, an autoregressive parameter of zero indicates the absence of a carryover effect from one time point to the next; mood would then not be predicted by its own previous values.

The off-diagonal elements of the Φ matrix in Equation (1) represent the cross-lagged effects (φ12 and φ21 in Equation (2)).[2] These effects indicate the direction and strength of the temporal dependence between the variables, controlling for the autoregressive effects and other cross-lagged effects (Bringmann et al., 2018). In this context, they represent the predictive or spillover effect of cheerful at time point t −1 on down at the subsequent time point t and the other way around (controlling for the autoregressive and other cross-lagged effects).

Apart from the temporal dynamics, the model also has two intercepts (α1 and α2 in Equation (2), contained in α in Equation (1)), which together with the lagged effects (contained in Φ) can be used to determine the means of the process (µ1 and µ2, contained in µ):

\( \begin{equation}
\boldsymbol{\mu}= (\boldsymbol{I} - \boldsymbol{\Phi})^{-1} \boldsymbol{\alpha}.
\label{eq:var3}
\tag{3}
\end{equation}
\)

Here, I represents an m × m identity matrix. For our two variables, we need a 2 × 2 identity matrix, and thus the equation becomes

\( \begin{equation}
\begin{bmatrix}
\mu_{1}\\
\mu_{2}
\end{bmatrix} = \left( \begin{bmatrix}
1 & 0\\
0 & 1
\end{bmatrix} - \begin{bmatrix}
\phi_{11} & \phi_{12}\\
\phi_{21} & \phi_{22}
\end{bmatrix}\right)^{-1} \begin{bmatrix}
\alpha_{1}\\
\alpha_{2}
\end{bmatrix}
\tag{4}
\end{equation}
\)

As the intercepts α often do not have a substantive interpretation, researchers prefer to interpret the means of the time series, µ, for example, to summarize how cheerful the person was on average over the whole time period. The mean is also interpreted as the equilibrium (or attractor) of a time series (De Haan-Rietdijk et al., 2016; Hamaker et al., 2018; Li et al., 2022; Oravecz et al., 2011).
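To make Equation (4) concrete, the following R snippet computes the implied means for a hypothetical set of intercepts and lagged effects; the numbers are chosen purely for illustration and do not come from the empirical examples.

```r
# Illustration of Equation (4): means implied by the intercepts and lagged
# effects of a bivariate VAR(1). All parameter values are hypothetical.
alpha <- c(2, 1)                                       # intercepts (alpha_1, alpha_2)
Phi   <- matrix(c( 0.4, -0.2,
                  -0.1,  0.3), nrow = 2, byrow = TRUE) # lagged effects

mu <- solve(diag(2) - Phi) %*% alpha                   # mu = (I - Phi)^{-1} alpha
mu                                                     # mu_1 = 3, mu_2 = 1 for these values
```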

In a VAR process, time series fluctuate around their means due to the innovations (\(\epsilon_{1,t}\) and \(\epsilon_{2,t}\)). The innovations thus reflect random perturbations that affect the time series at every time point, due to internal or external circumstances not included in the model. In other words, they capture the part of the current observations yt that cannot be explained by the previous observations yt−1. These innovations then carry over in time, pushing the process away from its mean(s) and thus giving rise to the fluctuations around it. The rate at which an individual returns from these perturbations back to the mean or equilibrium is determined by the temporal dynamics (Φ; Ariens et al., 2020). Importantly, when measurement error is not separately modeled, as is the case in the models we discuss here, the innovations also include measurement error (Koval et al., 2021; Schuurman et al., 2015). The innovations are assumed to follow a multivariate normal distribution, with means of zero and a symmetric positive definite covariance matrix Σ.

Therefore, it is possible for correlations to occur between innovations. In such cases, the innovation covariance matrix Σ contains non-zero values for the symmetric elements \(\sigma^2_{12}\) and \(\sigma^2_{21}\). Correlated VAR innovations can arise for many reasons. For instance, they can arise when unobserved variables (variables not explicitly considered in the model), such as bad weather, simultaneously influence both feeling cheerful and feeling down, leading to an association between them (Schuurman & Hamaker, 2019). A second possibility is that there are contemporaneous (lag-0) relations between variables, which cannot be modeled directly in a standard vector autoregressive model (for an example of a model that specifies such relationships explicitly, see Beltz & Gates, 2017). Third, incorrect model specifications can also lead to correlations between innovations. For instance, using an incorrect time lag can contribute to these correlations, as the true relationship might occur at a different time scale than what is assumed in the model (Bringmann et al., 2022).

Given that there are numerous factors leading to correlations between innovations (unobserved variables, unmodeled contemporaneous effects, or misspecified time lags), it is crucial to emphasize that contemporaneous effects and correlated innovations are not interchangeable terms (Brandt & Williams, 2007; Lütkepohl, 2007). This is why we will not call networks based on innovation correlations contemporaneous networks (for a different kind of interpretation, see Epskamp et al., 2018a). Although the psychological network literature commonly focuses only on the parameters indicating the relationships between variables, we show in Figure 1 how all parameters, including the means and innovation variances, can be visualized as a network.

Figure 1

Following the conventions in the psychological network literature, nodes and edges are used to represent the vector autoregressive (VAR) model (Borsboom et al., 2021). In this network representation, we illustrate the complete VAR model for two variables \(y_{1,t}\) and \(y_{2,t}\), in this case Cheerful and Down. On the left side, edges represent the autoregressive coefficients (\(\phi_{11}\) and \(\phi_{22}\)) and cross-lagged effects (\(\phi_{12}\) and \(\phi_{21}\)). The parameters \(\mu_1\) and \(\mu_2\) in the nodes represent the means of the two variables. On the right side of the figure, the innovation structure is represented, which includes the variances of the innovations (\(\sigma^2_\epsilon\)), denoted as \(\sigma^2_{1}\) and \(\sigma^2_{2}\) in the nodes. Additionally, the edge represents the covariance between the innovations, denoted as \(\sigma^2_{12}\) (which is identical to \(\sigma^2_{21}\), as the connection is undirected).


Finally, an important assumption for using the VAR model is that the data are covariance stationary (Bringmann et al., 2018; Chatfield, 2003; Hamilton, 1994). In the case of a Gaussian distribution, this means that the first two moments, the mean and the variance, do not change over time. Concretely, for the bivariate VAR(1) model, this entails that although the process is expected to fluctuate, the means, the autoregressive and cross-lagged effects, and the (co)variances of the innovations are assumed to be time-invariant. There are several ways to relax the assumption of fixed parameters over time, such as using splines or kernel methods to allow for time-varying parameters (e.g., using Φt instead of Φ; see also Ariens et al., 2020; Bringmann et al., 2017; Haslbeck et al., 2020). In the next section, we focus on relaxing the assumption of time-invariant parameters through the fixed moderated time series analysis approach (Adolf et al., 2017).

Fixed Moderated Time Series Analysis

Moderation analysis

In moderation analysis, interaction effects are used to examine how the relationship between the dependent and independent variables is influenced or altered by a third variable, the moderator (Cohen et al., 2003; Dawson, 2014). Extending this concept to the time series modeling domain, particularly in the context of the (vector) autoregressive model, we investigate, in the univariate case, the relationship between yt and its lagged version (yt−1), as well as whether this relationship is influenced by a moderator at the same time point (referred to here as xt). As we focus on moderation in time series analysis, our moderator is a covariate that can itself vary over time. A time-varying covariate X = (x1,…,xT) could be, for instance, whether or not one is in the company of others. Following the standard multiple regression framework, moderation analysis includes both the main effect of xt and its interaction with the independent variable of interest; in this case, yt−1 is multiplied by the moderator xt. In this way, potential changes in the intercept (α) and autoregressive effect (φ) due to the moderator can be captured. Since the moderator values are fixed and not estimated, we make the strong assumption that the moderator is error-free. This assumption implies that any measurement error in the moderator could result in inaccurate model estimates (Adolf et al., 2017). This approach to time series analysis, which utilizes a fixed moderator, is referred to as fixed moderated time series analysis (Adolf et al., 2017). In the univariate case, a fixed moderated autoregressive model can be formulated as follows:

\( \begin{equation}
y_t= \alpha + \phi y_{t-1}+ \beta_\alpha x_t + \beta_\phi y_{t-1} x_t + \epsilon_t.
\label{eq:uni}
\tag{5}
\end{equation}
\)

The main effect of xt is indicated by βα and the interaction effect of xt with the lagged variable yt−1 by βφ. Rewriting the formula as follows, it becomes clear that both the intercept and the autoregressive effect are moderated by xt (see Ariens et al., 2022):

\( \begin{equation}
\begin{aligned}
y_t&= (\alpha + \beta_\alpha x_t) + (\phi + \beta_\phi x_t)y_{t-1} + \epsilon_t.
\end{aligned}
\label{eq:mod2}
\tag{6}
\end{equation}
\)

Therefore, we could also rewrite the formula as

\( \begin{equation}
y_t= \alpha_t + \phi_t y_{t-1} + \epsilon_t
\label{eq:mod3}
\tag{7}
\end{equation}
\)

where both the intercept and the autoregressive effect can now vary over time (as indicated by the subscript t added to both parameters) due to the moderator xt, with αt = α + βαxt and φt = φ + βφxt. The intercept and autoregressive effect are thus specified to be linear functions of the moderator (xt). For instance, in Equation (5), βφ indicates how much the autoregressive effect of yt−1 on yt changes as xt increases or decreases by one unit (Hayes & Montoya, 2017). In addition, it does not have to be presumed that the moderator has a contemporaneous effect; instead, it can also have a lagged effect, such as a lag-1 effect, in which case xt−1 replaces xt in the equations above (Adolf et al., 2017; Ariens et al., 2022).
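As a minimal illustration of Equations (5) to (7), the R sketch below simulates a univariate fixed moderated AR(1) process with a binary moderator and then recovers the moderation effects with ordinary least squares; all parameter values and the moderator pattern are hypothetical.

```r
set.seed(1)
T_len <- 300
x     <- rep(c(0, 1), each = T_len / 2)  # hypothetical binary moderator (e.g., being alone)

alpha      <- 0      # baseline intercept
phi        <- 0.4    # baseline autoregressive effect
beta_alpha <- 1.0    # moderation of the intercept
beta_phi   <- 0.3    # moderation of the autoregressive effect

y <- numeric(T_len)
for (t in 2:T_len) {
  alpha_t <- alpha + beta_alpha * x[t]   # Equation (6): moderated intercept
  phi_t   <- phi   + beta_phi   * x[t]   # Equation (6): moderated autoregression
  y[t]    <- alpha_t + phi_t * y[t - 1] + rnorm(1)
}

# Equation (5) as a regression: y_t on y_{t-1}, x_t, and their interaction
fit <- lm(y[-1] ~ y[-T_len] * x[-1])
coef(fit)  # approximately alpha, phi, beta_alpha, and beta_phi
```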

Furthermore, although not standard practice, moderation of the innovation variance can be of substantial interest. As before, the innovations (\(\epsilon_t\)) in the AR model are normally distributed with a mean of zero and a variance of \(\sigma^2\) (\(\epsilon_t \sim N(0,\sigma^2)\)). Moreover, as shown in Figure 2, the innovation variance is part of the overall variability of Y = (y1,…,yT), and therefore codetermines the variance (and thus standard deviation) of Y, denoted \(\psi^2\), which is given by (Chatfield, 2003; Jongerling et al., 2015):

\( \begin{equation}
\psi^2= \frac{\sigma^2}{1-\phi^2}.
\label{eq:variab}
\tag{8}
\end{equation}
\)

The innovation variance can change as the process of interest (Y, e.g., someone’s cheerfulness) varies due to shifting unobserved factors. For example, when someone is in the company of others, more unobserved factors may influence cheerfulness predictions compared to when they are alone, leading to a higher innovation variance and a higher overall variability (i.e., standard deviation of Y; see Figure 2). A reason could be that, in a social context, additional factors like the mood of others may not be accounted for, but are relevant for predicting a person’s cheerfulness. Notably, in the case where Y represents an emotion, the innovation variance can also reflect variability in reactivity to these unobserved factors (Hamaker et al., 2018). For instance, it may be that on stressful days unobserved factors, such as the weather or bad news, result in more variation in affect than similar unobserved factors on non-stressful days, again increasing the innovation variance and overall variability (see Figure 2; Jongerling et al., 2015; Koval et al., 2021).

Figure 2

On the left, the innovation variance of variable \(Y = (y_1, \dots, y_T)\) is not moderated by variable \(X = (x_1, \ldots, x_T)\). On the right, the innovation variance is moderated by variable \(X\). As a result, when the binary moderator is ‘on’ after 125 time points, the innovation and overall variance are higher compared to the left-hand panel. The code for this simulated example is available in the file Simulated_example_figure2.html.


In all these cases, the innovation variance, denoted as \(\sigma^2\), changes over time t (\(\sigma^2_t\)), and thus reflects heteroskedasticity (Cohen et al., 2003). By incorporating one or multiple moderator variables (X), it becomes possible to also model changes in the innovation variance (see again Figure 2; Adolf et al., 2017):

\( \begin{equation}
{\sigma_t^{2}}=(\sigma^2 + \beta_{\sigma^2} x_t).
\tag{9}
\end{equation}
\)

This enables the identification of changes over time in the variability of Y (e.g., cheerfulness) due to exposure and/or reactivity to unobserved factors that are not included in the model.
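The sketch below illustrates this idea in R, in the spirit of the simulated example shown in Figure 2 (it is not the code in Simulated_example_figure2.html): the innovation variance of an AR(1) process increases once a binary moderator switches on, and the resulting overall variability is compared with Equation (8). All values are hypothetical.

```r
set.seed(2)
T_len <- 250
x     <- c(rep(0, 125), rep(1, 125))  # moderator switched 'on' after 125 time points

phi         <- 0.4   # autoregressive effect (kept constant here)
sigma2      <- 1.0   # baseline innovation variance
beta_sigma2 <- 2.0   # moderation of the innovation variance (Equation (9))

y <- numeric(T_len)
for (t in 2:T_len) {
  sigma2_t <- sigma2 + beta_sigma2 * x[t]           # Equation (9)
  y[t]     <- phi * y[t - 1] + rnorm(1, sd = sqrt(sigma2_t))
}

# Overall variance implied by Equation (8) in each regime ...
sigma2 / (1 - phi^2)                    # x = 0: about 1.19
(sigma2 + beta_sigma2) / (1 - phi^2)    # x = 1: about 3.57

# ... compared with the empirical variances in the two halves of the series
var(y[1:125]); var(y[126:250])
```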

Moving the fixed moderated time series analysis to the multivariate and thus network realm, we obtain, again in its simplest form, a two-variable system with the following equations (the covariance matrix of the innovations being Σt):

\( \begin{equation}
\begin{aligned}
Che_{t}&= \alpha_{1,t}+\phi_{11,t} Che_{t-1}+\phi_{12,t} Dow_{t-1}+\epsilon_{1,t}\\
Dow_{t}&= \alpha_{2,t}+\phi_{21,t} Che_{t-1}+\phi_{22,t} Dow_{t-1}+\epsilon_{2,t}
\end{aligned}
\label{eq:varmod2}
\tag{10}
\end{equation}
\)

In this vector autoregressive model, all parameters are again allowed to change (denoted by the subscript t) conditional on a moderator xt:

\( \begin{equation}
\begin{aligned}
\begin{bmatrix}
\alpha_{1}\\
\alpha_{2}
\end{bmatrix}_t &=
\begin{bmatrix}
\alpha_{1}\\
\alpha_{2}
\end{bmatrix} + x_t \begin{bmatrix}
\beta_{\alpha,1}\\
\beta_{\alpha,2}
\end{bmatrix}
\\
\begin{bmatrix}
\phi_{11} & \phi_{12}\\
\phi_{21} & \phi_{22}
\end{bmatrix}_t &=
\begin{bmatrix}
\phi_{11} & \phi_{12}\\
\phi_{21} & \phi_{22}
\end{bmatrix} + x_t
\begin{bmatrix}
\beta_{\phi,11} & \beta_{\phi,12}\\
\beta_{\phi,21} & \beta_{\phi,22}
\end{bmatrix}\\
\begin{bmatrix}
\sigma^2_{11} & \sigma^2_{12}\\
\sigma^2_{21} & \sigma^2_{22}
\end{bmatrix}_t &=
\begin{bmatrix}
\sigma^2_{11} & \sigma^2_{12}\\
\sigma^2_{21} & \sigma^2_{22}
\end{bmatrix} + x_t \begin{bmatrix}
\beta_{\sigma^2,11} & \beta_{\sigma^2,12}
\\
\beta_{\sigma^2,21} & \beta_{\sigma^2,22}
\end{bmatrix}.
\end{aligned}
\label{eq:matrixmod2}
\tag{11}
\end{equation}
\)

Furthermore, instead of the intercepts α, the moderated means can be derived, based on Equation (3), this time using the moderated Φt and αt:

\( \begin{equation}
\boldsymbol{\mu_t}= (\boldsymbol{I} – \boldsymbol{\Phi_t})^{-1} \boldsymbol{\alpha_t}.
\label{eq:mutv}
\tag{12}
\end{equation}
\)

This results in indirect estimation of the moderated µt. In the next section, an alternative method for directly inferring the mean is introduced. The fixed moderated time series model for two variables can again be visualized as a network (see Figure 3). Compared to Figure 1, the moderator xt has now been included, with arrows indicating that all parameters are moderated.
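For completeness, the following R sketch simulates a bivariate fixed moderated VAR(1) along the lines of Equations (10) and (11), in which the intercepts, the lagged effects, and the innovation covariance matrix all shift with a binary moderator; the parameter values are hypothetical and chosen only so that both regimes yield a stable process.

```r
set.seed(3)
library(MASS)  # for mvrnorm()

T_len <- 400
x     <- rbinom(T_len, size = 1, prob = 0.5)  # hypothetical binary moderator

alpha      <- c(0, 0);                 beta_alpha <- c(-0.5, 0.5)
Phi        <- matrix(c( 0.3, -0.2,
                       -0.2,  0.3), 2, byrow = TRUE)
beta_Phi   <- matrix(c( 0.1,  0.15,
                        0.0,  0.0),  2, byrow = TRUE)
Sigma      <- matrix(c( 1.0, -0.3,
                       -0.3,  1.0), 2)
beta_Sigma <- matrix(c( 0.0,  0.0,
                        0.0,  0.5),  2)

Y <- matrix(0, nrow = T_len, ncol = 2, dimnames = list(NULL, c("Che", "Dow")))
for (t in 2:T_len) {
  alpha_t <- alpha + x[t] * beta_alpha   # moderated intercepts
  Phi_t   <- Phi   + x[t] * beta_Phi     # moderated lagged effects
  Sigma_t <- Sigma + x[t] * beta_Sigma   # moderated innovation covariance
  Y[t, ]  <- alpha_t + Phi_t %*% Y[t - 1, ] + mvrnorm(1, mu = c(0, 0), Sigma = Sigma_t)
}
head(Y)
```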

Figure 3

Moderated network. In this network representation, we illustrate the complete VAR model for two variables \(y_{1,t}\) and \(y_{2,t}\), in this case Cheerful and Down, conditioned on a moderator \(x_t\) (indicated by the blue diamond shape). On the left side, edges represent the changing autoregressive coefficients (\(\phi_{11,t}\) and \(\phi_{22,t}\)) and cross-lagged effects (\(\phi_{12,t}\) and \(\phi_{21,t}\)). The parameters \(\mu_{1,t}\) and \(\mu_{2,t}\) in the nodes represent the means of the two variables. On the right side of the figure, the innovation structure is represented, which includes the changing variances of the innovations (denoted as \(\sigma^2_{1,t}\) and \(\sigma^2_{2,t}\) in the nodes). Additionally, the edge represents the changing covariance between the innovations, denoted as \(\sigma^2_{12,t}\) (or \(\sigma^2_{21,t}\), since the connection is undirected).


State space estimation

The estimation of a fixed moderated time series model can, for the most part, be carried out using the standard ordinary least squares approach of multiple regression. However, when the innovation variances change, the ordinary least squares approach becomes inadequate, because it cannot accurately estimate the changing innovation variance. This inaccuracy cascades into an incorrect estimation of the variability over time, \(\psi_t\) (see also Equation (8)), as an accurate estimate requires the innovation variance to be able to change over time. Furthermore, while the changing intercept αt, the autoregressive parameter φt (and, in the case of a multivariate process, also the cross-lagged parameters), and µt are still estimated accurately, their standard errors will be biased (see also Cohen et al., 2003; Hayes et al., 2007; Jongerling et al., 2015). Consequently, this can lead to erroneous conclusions about whether these parameters exhibit significant changes or not.

In this section, we therefore introduce the alternative state space framework.[3] More specifically, we will utilize the model developed by Adolf et al. (2017), which uses frequentist state space inference. State space modeling is a flexible framework that differs from standard ordinary least squares estimation, among other things, in that it allows for the modeling of latent variables. In this regard, it is similar to structural equation modeling. However, state space modeling was specifically developed for the time domain (see Chow et al., 2010, for similarities and differences between the two approaches).

According to the state space framework, two types of time series (latent and observed) describe the process to be modeled. First, there is the latent state ηt, which aims to capture the true but not directly observable process, for instance, an individual’s precise level of cheerfulness at time point t. It is assumed that the dependence or autocorrelation among the observations is induced by these latent states, that is, by the dependency in the latent affect process (Shumway & Stoffer, 2017). Second, there is the imperfectly measured observation yt, such as the observed affect value, which still contains measurement error. The observed measurement yt is then related to the latent state ηt through Equations (13) and (14) (Auger-Méthé et al., 2021), commonly referred to as the measurement equation and the state equation, respectively (Chua & Tripodis, 2022). This enables the distinction between process fluctuations due to dynamic error (i.e., innovations) and measurement error (Schuurman & Hamaker, 2019). The Appendix provides a comprehensive illustration of how the state and measurement equations can be broadly formulated to encompass various time series models, including those with time-varying parameters and measurement errors. It is important to note that, for the sake of simplicity, we have omitted measurement error in our model. We will revisit this point in the discussion section.

In our specific case, we can first write our standard VAR(1) model in state space form, with the one difference that now the mean can be estimated directly and does not need to be inferred from the intercept and autoregressive effects. This results in the measurement equation, where the mean is directly estimated (see also the Appendix and Schuurman et al., 2015)

\( \begin{equation}
\begin{aligned}
y_{1,t}&= \mu_{1}+ \eta_{1,t}\\
y_{2,t}&= \mu_{2}+ \eta_{2,t},
\end{aligned}
\label{eq:var6}
\tag{13}
\end{equation}
\)

and the state equation, in which an intercept is no longer needed,

\( \begin{equation}
\begin{aligned}
\eta_{1,t}&= \phi_{11} \eta_{1,t-1}+\phi_{12} \eta_{2,t-1}+\epsilon_{1,t}\\
\eta_{2,t}&= \phi_{21} \eta_{1,t-1}+\phi_{22} \eta_{2,t-1}+\epsilon_{2,t}.
\end{aligned}
\label{eq:var7}
\tag{14}
\end{equation}
\)

Building on our previous examples, y1,t and η1,t represent the variable cheerful (the observed and latent value, respectively), whereas y2,t and η2,t pertain to the variable down (the observed and latent value, respectively). The notation ηt indicates that these variables are no longer modeled as observed, but as latent states following the vector autoregressive process (see Figure 4).

Figure 4

This path diagram represents the state space representation of a VAR model with two variables, y1,t and y2,t. Panel A illustrates the direct modeling of the time-varying means (µ1,t and µ2,t) instead of requiring inference from intercepts. Additionally, panel A highlights two distinct equations: the measurement equation, which models the observable process (y1,t and y2,t), and the state or transition equation, which models the latent process (η1,t and η2,t) following a VAR model. Panel B illustrates how the time-varying parameters depend on a moderator variable (i.e., the fixed moderator variable, xt). Furthermore, it shows that all moderated parameters have an intercept when being modeled (i.e., a constant of 1). For example, µ1,t = µ1 + βµ,1xt, where µ1,t has the intercept µ1. See also Equation (17).


The fixed moderated model estimated in the state space framework can then be written with the following measurement equation

\( \begin{equation}
\begin{aligned}
y_{1,t}&= \mu_{1,t}+ \eta_{1,t}\\
y_{2,t}&= \mu_{2,t}+ \eta_{2,t},
\end{aligned}
\label{eq:varStatemean}
\tag{15}
\end{equation}
\)

and state equation

\( \begin{equation}
\begin{aligned}
\eta_{1,t}&= \phi_{11,t} \eta_{1,t-1}+\phi_{12,t} \eta_{2,t-1}+\epsilon_{1,t}\\
\eta_{2,t}&= \phi_{21,t} \eta_{1,t-1}+\phi_{22,t} \eta_{2,t-1}+\epsilon_{2,t}.
\end{aligned}
\label{eq:statespace2}
\tag{16}
\end{equation}
\)

with the covariance matrix of the innovations being Σt. Once more, all parameters (indicated by the subscript t) can vary based on a moderator xt:

\( \begin{equation}
\begin{aligned}
\begin{bmatrix}
\mu_{1}\\
\mu_{2}
\end{bmatrix}_t &=
\begin{bmatrix}
\mu_{1}\\
\mu_{2}
\end{bmatrix} + x_t \begin{bmatrix}
\beta_{\mu,1}\\
\beta_{\mu,2}
\end{bmatrix}
\\
\begin{bmatrix}
\phi_{11} & \phi_{12}\\
\phi_{21} & \phi_{22}
\end{bmatrix}_t &=
\begin{bmatrix}
\phi_{11} & \phi_{12}\\
\phi_{21} & \phi_{22}
\end{bmatrix} + x_t
\begin{bmatrix}
\beta_{\phi,11} & \beta_{\phi,12}\\
\beta_{\phi,21} & \beta_{\phi,22}
\end{bmatrix}\\
\begin{bmatrix}
\sigma^2_{11} & \sigma^2_{12}\\
\sigma^2_{21} & \sigma^2_{22}
\end{bmatrix}_t &=
\begin{bmatrix}
\sigma^2_{11} & \sigma^2_{12}\\
\sigma^2_{21} & \sigma^2_{22}
\end{bmatrix} + x_t \begin{bmatrix}
\beta_{\sigma^2,11} & \beta_{\sigma^2,12}
\\
\beta_{\sigma^2,21} & \beta_{\sigma^2,22}
\end{bmatrix}.
\end{aligned}
\label{eq:matrixmod3}
\tag{17}
\end{equation}
\)

This model can be visualized as a path diagram (see Figure 4), illustrating the state space structure, or, more straightforwardly, as a simplified network, as shown in Figure 3.

Importantly, the way in which we estimate the mean has implications for interpreting the moderator’s effect in the system. While the unmoderated mean remains the same whether it is indirectly inferred or directly estimated, the moderated mean takes on a different value and interpretation, depending on whether it is inferred indirectly via Equation (3) or estimated directly via Equation (13). This distinction arises because, as seen in Equations (13) and (14), as well as Figure 3, the moderated effect on the means µt is estimated in a distinct equation, the measurement equation. Consequently, the moderation effect influences the means µt at time point t, but this moderated effect does not propagate further into the system, and thereby does not affect future time points via autoregressive and cross-lagged effects (Ernst et al., 2023). This can be seen in Figure 5: in the model where the mean is estimated directly (the right side of the figure), there are far fewer peaks and valleys after the moderator is introduced, and the change is more abrupt. In contrast, in the model where the mean is estimated indirectly (the left side of the figure), there are higher peaks and valleys, reflecting an increase in the autocorrelation, and the change to the new mean is more gradual (Ernst et al., 2023).

The consequence of this is that, even though the moderated mean can be directly estimated, it holds neither the same value nor the same interpretation as when it is indirectly inferred (see Simulated_example_meanVSintercept_fig5.html: fitting the mean directly results in a mean of 12.9 instead of 15). However, within the state space model, one can also opt to estimate the fixed moderated model not with the mean but simply with the intercept, as shown in Equation (10). We will compare both estimation approaches in the empirical example.
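To illustrate this distinction, the following R sketch, a minimal analogue of the simulation underlying Figure 5 (not a reproduction of Simulated_example_meanVSintercept_fig5.html), applies the same binary moderation effect once to the intercept of an AR(1) process, so that the shift feeds through the autoregression, and once directly to the mean on top of an unchanged latent AR(1) process; all values are hypothetical.

```r
set.seed(4)
T_len <- 250
x     <- c(rep(0, 125), rep(1, 125))  # moderator switched on halfway
phi   <- 0.7
eps   <- rnorm(T_len)

# (a) Moderation of the intercept: the shift propagates through phi,
#     so the new level is reached gradually (cf. the left panel of Figure 5).
y_intercept <- numeric(T_len)
for (t in 2:T_len) {
  y_intercept[t] <- (0 + 3 * x[t]) + phi * y_intercept[t - 1] + eps[t]
}

# (b) Moderation of the mean in the measurement equation: the latent AR(1)
#     process is unchanged and only the level shifts abruptly (cf. the right panel).
eta <- numeric(T_len)
for (t in 2:T_len) eta[t] <- phi * eta[t - 1] + eps[t]
y_mean <- (0 + 3 * x) + eta

# Implied level after the switch: 3 / (1 - phi) = 10 in (a) versus 3 in (b)
matplot(cbind(y_intercept, y_mean), type = "l", lty = 1,
        xlab = "time point", ylab = "y")
```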

Figure 5

A simulated example of two different effects of the moderator for one individual (for the code of the simulated example, see Simulated_example_meanVSintercept_fig5.html). On the left, the mean is estimated indirectly (via Equations (10) and (12), estimating the intercept first), so that the moderation effect feeds into the temporal dynamics, leading to higher peaks and valleys after the moderator is introduced. On the right, the mean is estimated directly (via Equation (13)); the dynamic pattern stays the same before and after the moderator is introduced and merely the mean level changes.


The Kalman filter

To estimate the parameters of the fixed moderated time series model, Adolf et al. (2017) proposed applying the Kalman filter, which is commonly used to estimate the parameters of state space models (Durbin & Koopman, 2012; Kalman, 1960).

The basic idea of the Kalman filter is that uncertainty or noise in our measurements is “filtered” by comparing these measurements to the scores that one would expect based on the dynamics of the process estimated from previous measurements (Rhudy et al., 2017). In other words, the filter generates predictions of future measurements and updates these based on the actual observations at the respective time points. These updates then allow the model parameters governing the process dynamics to be re-estimated. In statistical terms, the Kalman filter algorithm operates recursively, continuously updating the estimate of the state vector ηt from the previous time point t−1 to the current time point t whenever new observations yt become available (Song & Ferrer, 2009). The Kalman filter is used to estimate the latent variables ηt and yields maximum likelihood estimates for all parameters in the model (Hamaker & Grasman, 2012; Harvey, 1990).

To initiate the Kalman filter and calculate the maximum likelihood, the initial distribution, that is, the latent means µ0 and the associated covariance matrix Σ0, must be given for time point t = 0. Typically, this initial distribution is unknown, and therefore the mean and covariance of the observed variables are used as the initial condition (Gu et al., 2014; Song & Ferrer, 2009). Besides the initial distribution, all estimated parameters (the circles in Figure 4) need to be given starting values (Adolf et al., 2017). The initial condition and starting values tend to have a minimal impact on series of sufficient length, as their effects decay exponentially over time (Durbin & Koopman, 2012).

One significant advantage of the Kalman filter and state space modeling is their ability to handle missing values seamlessly within time series data (Gu et al., 2014; Hamaker & Grasman, 2012). In the presence of missing data, the Kalman algorithm simply delays updates until new information becomes available, at which point it incorporates the observed values. This property also proves valuable in dealing with unevenly spaced measurements, as missing values can be introduced to align measurements at regular intervals (Haan-Rietdijk et al., 2017; Hamaker et al., 2018), for instance, to ensure a consistent three-hour gap between observations (similar to the approach in Dynamic Structural Equation Modeling, DSEM, within a Bayesian context; McNeish & Hamaker, 2019). The Kalman filter makes a prediction of the next observation based on the lagged predictors. This prediction is compared to the observation for that occasion and updated in light of it. If there is no observation, the Kalman filter simply continues with the prediction it had; if there is an observation, the Kalman filter continues with the updated prediction. In both cases, the filter moves forward to the next occasion and makes a prediction based on the observations and predictions of the previous occasion. This method ensures that no observations are lost, even when many time points have missing values in the variables at hand.
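To give an impression of how this works, the sketch below implements the basic predict and update recursion of the Kalman filter in R for the simple univariate case of Equations (13) and (14) (known mean, autoregressive effect, and innovation variance, and no separate measurement error), skipping the update step when an observation is missing. This is a didactic sketch only, not the estimation routine used by fmTSA or OpenMx.

```r
# Minimal univariate Kalman filter for y_t = mu + eta_t and
# eta_t = phi * eta_{t-1} + eps_t, with eps_t ~ N(0, sigma2).
# Parameters are taken as known here; in practice they are estimated by
# maximizing the log-likelihood that the filter returns.
kalman_filter <- function(y, mu, phi, sigma2) {
  T_len    <- length(y)
  eta_filt <- numeric(T_len)          # filtered state estimates
  P_filt   <- numeric(T_len)          # filtered state variances
  eta_pred <- 0                       # initial state mean
  P_pred   <- sigma2 / (1 - phi^2)    # initial (stationary) state variance
  loglik   <- 0
  for (t in seq_len(T_len)) {
    if (is.na(y[t])) {
      # Missing observation: keep the prediction, no update
      eta_filt[t] <- eta_pred
      P_filt[t]   <- P_pred
    } else {
      v   <- y[t] - (mu + eta_pred)   # prediction error
      F_t <- P_pred                   # prediction error variance (no measurement error)
      K   <- P_pred / F_t             # Kalman gain
      eta_filt[t] <- eta_pred + K * v # updated state estimate
      P_filt[t]   <- (1 - K) * P_pred
      loglik <- loglik - 0.5 * (log(2 * pi) + log(F_t) + v^2 / F_t)
    }
    # Predict the state at the next occasion
    eta_pred <- phi * eta_filt[t]
    P_pred   <- phi^2 * P_filt[t] + sigma2
  }
  list(state = eta_filt, loglik = loglik)
}

# Example with missing values: the filter simply carries its prediction forward
set.seed(5)
y_sim <- 5 + as.numeric(arima.sim(list(ar = 0.5), n = 200))
y_sim[sample(200, 40)] <- NA
kalman_filter(y_sim, mu = 5, phi = 0.5, sigma2 = 1)$loglik
```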

In contrast to the process variables (Y), the moderator (X) is treated as fixed, and therefore missing values are not allowed in the moderator itself (Adolf et al., 2017). Hence, missingness must be addressed, for instance through imputation, before the moderator can be used in the model.

Empirical Example

Research questions

In the analyses, we aimed to illustrate how moderation can be employed to test simple clinical network hypotheses. Given that, in network analyses, affect items are typically used separately instead of in sum scores or latent factors (Borsboom & Cramer, 2013; Cramer et al., 2010; Robinaugh et al., 2020), we opted to use two commonly used affect variables, Cheerfulness and Down, for constructing the network structure (e.g., Bringmann et al., 2013; Groen et al., 2020; Snippe et al., 2017; Stochl et al., 2019; Van Roekel et al., 2019). For the moderating variables, we selected both a dichotomous (Alone) and a continuous variable (Sleep quality). We chose these variables because they are known to have an influence on mood (Hawkley & Cacioppo, 2010; Konjarski et al., 2018; Park et al., 2020; Watling et al., 2017). Our objective was to investigate which aspects of the networks are influenced by these moderators at the concurrent time point t.

More specifically, the first research question (RQ1), with Alone as the moderator, was: On which aspect of the networks consisting of Cheerfulness and Down does the moderator Alone have an influence: the means and temporal dynamics network (i.e., autoregressive and cross-lagged effects) or the innovation network (i.e., innovation variances and covariance)? The second research question (RQ2), with Sleep quality as the moderator, was: On which aspect of the networks consisting of Cheerfulness and Down does the moderator Sleep quality have an influence: the means and temporal dynamics network (i.e., autoregressive and cross-lagged effects) or the innovation network (i.e., innovation variances and covariance)?

Furthermore, for both research questions, we explored whether a model estimating the mean directly using the measurement equation fits better than a model estimating the intercept directly using a state equation. See the files RQ1.html and RQ2.html at the OSF page for the code.

Data

We applied the fixed moderated time series analysis approach to time series data of individuals included in the TRANS-ID Recovery study (https://www.transid.nl/?lang=en). Participants of the TRANS-ID Recovery study were individuals with a current depressive episode, most of whom started psychological treatment for depression during the study period. Participants engaged in ecological momentary assessment (EMA) five times a day at fixed three-hour intervals for a period of four months. The EMA included questions on momentary mental states as well as context and behavior during the past three-hour interval. Written informed consent was obtained from all participants. All procedures were approved on December 12, 2016, by the Medical Ethical Committee of the University Medical Center Groningen (Registration No. NL58848.04.16). A full description of the study design, EMA protocol, and inclusion and exclusion criteria can be found on the Open Science Framework (https://osf.io/85ngu). A flowchart of the participants and a description of the sample can be found in Helmich et al. (2023).

We selected individuals from the TRANS-ID Recovery study because these individuals were likely to show change in emotions, behaviors, and cognitions (see Snippe et al., 2024) as they started treatment for depression during the study period. A second reason for sampling individuals from this data set is that a high number of observations was available for most individuals. Individuals received 620 EMA prompts over a four-month period with a mean compliance rate of 85 percent. The two participants for the analysis were chosen purely for the purpose of illustrating the fixed moderated time series model. Both participants were female and had a major depressive episode at the start of the study period. They received weekly psychotherapy for depression during the study period.

Specifically, we selected these participants using simple linear regression based on either a significant difference between conditions Alone and not Alone in the variable Cheerfulness, or a strong correlation between Cheerfulness and Sleep quality. We chose the participants with the largest mean difference between conditions Alone and not Alone, or the strongest correlation between Cheerfulness and Sleep quality. This selection process can be likened to cherry-picking and was aimed at effectively showcasing the model. Therefore, it is important to limit the substantive conclusions drawn from these results, and to keep in mind that their generalizability will be restricted. We refrain from providing specific characteristics of the selected participants to safeguard their privacy.

The final item set consisted of Cheerfulness (formulated as ‘I feel cheerful’), Down (formulated as ‘I feel down’), Sleep quality (formulated as ‘I have slept well last night’), and Alone. Cheerfulness, Down, and Sleep quality were assessed using a visual analogue scale, ranging from ‘not at all’ (scored as 0) to ‘very much’ (scored as 100). For the item Alone, we selected the category ‘Nobody’ from the question ‘Who am I with at the moment?’ Other available categories included: ‘Partner’, ‘Housemates’, ‘Family’, ‘Family (living elsewhere)’, ‘Friends’, ‘Colleagues or classmates’, ‘Caregiver’, ‘Acquaintances’, and ‘Strangers’.[4] The variable was scored 0 when the person was in the company of others and 1 when the person was alone (with nobody). In contrast to the other items, Sleep quality was measured only once a day.

Preprocessing steps

Before conducting the analyses, we performed several preprocessing steps. The first step involved creating a three-hour interval between all adjacent data points. This allowed any lagged effect to be interpreted as a three-hour effect, and also ensured that the last beep of the day would not predict the next morning’s beep, as the interval between them is more than three hours. To establish this three-hour interval, we inserted missing values between data points that were adjacent in the data but more than three hours apart in time (e.g., 20:00 in the evening and 8:00 the next morning). In the second step, all continuous variables (Cheerful, Down, and Sleep quality) were standardized. This step was necessary for model convergence (Ketkar, 2017, Chapter 8).
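A minimal sketch of these two preprocessing steps in R, assuming a data frame dat with a POSIXct column timestamp and the EMA variables; the variable names are hypothetical, the sketch assumes the beeps fall exactly on the three-hour grid (in line with the fixed sampling scheme), and the actual code is available on the OSF page.

```r
# Hypothetical preprocessing sketch: expand the observations to an
# equidistant three-hour grid (rows without an observation become NA)
# and standardize the continuous variables.
grid <- data.frame(
  timestamp = seq(min(dat$timestamp), max(dat$timestamp), by = "3 hours")
)
dat_eq <- merge(grid, dat, by = "timestamp", all.x = TRUE)

# Standardize the continuous variables (needed for model convergence)
vars <- c("Cheerful", "Down", "SleepQuality")
dat_eq[vars] <- scale(dat_eq[vars])
```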

As the fmTSA model assumes that the moderator values are known, the third step involved handling missing data in the moderator variable. For RQ1 (the Alone moderator and participant 1), we theoretically imputed missing values by assuming that when no data were available, the person was likely alone. Therefore, all missing values were imputed with a value of 1.

Regarding RQ2 (the Sleep quality moderator and participant 2), the imputation consisted of two parts: theoretical and multiple imputation. We began with the theoretical imputation, driven by the assumption that sleep quality likely influences mood throughout the entire day. Since sleep quality was measured only at the start of each day, and the moderator needed a value throughout the day for examining its impact on mood, we simply copied the sleep quality value from the first beep of the day to the remaining beeps of that day.[5]

However, on some occasions, participant 2 missed the first beep of the day, and as a result, we lacked a measurement of sleep quality for those days. Consequently, after the theoretical imputations, missing values still remained for which we did not have a clear idea of what a plausible value could be. Therefore, the second part involved multiple imputation using the R-package mice (Buuren & Groothuis-Oudshoorn, 2011). Multiple imputation generates various alternative moderator values that are plausible based on the available information, including (lagged) observations of the other variables in the network (Cheerful and Down) and a lagged version of the moderator variable itself (i.e., Sleep quality; for further details, see section 3D in file RQ2.rmd). The analysis results are subsequently pooled across these alternative datasets.
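A sketch of the two imputation parts for the Sleep quality moderator, under the assumptions described above; the column names, including the day index, are hypothetical, and the actual code is in RQ2.rmd on the OSF page.

```r
library(mice)

# Part 1: theoretical imputation -- copy the morning sleep-quality value
# to all remaining beeps of the same day (assumes a day index column).
dat_eq$SleepQuality <- ave(dat_eq$SleepQuality, dat_eq$day,
                           FUN = function(z) {
                             if (all(is.na(z))) z else rep(z[!is.na(z)][1], length(z))
                           })

# Part 2: multiple imputation for days on which the first beep was missed,
# using (lagged) Cheerful and Down and the lagged moderator as predictors.
imp_dat <- data.frame(
  SleepQuality     = dat_eq$SleepQuality,
  Cheerful_lag     = c(NA, head(dat_eq$Cheerful, -1)),
  Down_lag         = c(NA, head(dat_eq$Down, -1)),
  SleepQuality_lag = c(NA, head(dat_eq$SleepQuality, -1))
)
imp <- mice(imp_dat, m = 5, seed = 123, printFlag = FALSE)

# One completed version of the moderator; analyses are later pooled over all m
x_imputed <- complete(imp, 1)$SleepQuality
```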

The process variables Cheerful and Down do not need to be imputed because missing values in these variables are handled by the Kalman filter. To study the effect of imputation, we also conducted sensitivity analyses. For RQ1, we filled in 0 instead of 1 for missing values, assuming the person was with somebody instead of alone. For RQ2, we filled in zeros everywhere instead of using multiple imputation. In the case of RQ1, this led to slightly different results (see Footnote 6), whereas for RQ2, the overall conclusions stayed the same.

Analyses

All analyses and visualizations were done in R (R Core Team, 2023). For model estimation we used the R-package fmTSA (Adolf et al., 2017), available at https://gitlab.kuleuven.be/ppw-okpiv/researchers/u0119417/published/fmTSA, which makes use of the R-package OpenMx (Neale et al., 2016). With the function buildMTSAMxModel we applied the Kalman filter for model estimation. To initialize the Kalman filter, we used the mean and covariance of the standardized empirical data. We began with plausible starting values, such as a positive autocorrelation below 1. Then, we employed the OpenMx function ‘mxTryHard’ to iteratively refine the starting values, utilizing prior parameter estimates. For visualization of the networks we used the R-package qgraph (Epskamp et al., 2012). See the supplemental code for the exact versions of R and the packages used in the analyses.

For the analyses, we report both point estimates and the 95% confidence intervals (CIs) of all parameters of interest. Our approach relies on likelihood-based CI calculations implemented in OpenMx. Likelihood-based CIs utilize the exact shape of the likelihood function, as opposed to the approximations used in Wald-type CIs. This method is particularly advantageous in situations with small sample sizes (Adolf et al., 2017; Pek & Wu, 2015). However, for the innovation (co)variances, we used Wald-based CIs based on the OpenMx-provided point and SE estimates, as calculating likelihood-based CIs was not possible for these three parameters. Note that for the multiple imputation the confidence intervals are based on the mitml package (Grund et al., 2023).

To explore whether a model that directly estimates the mean using the measurement equation fits better than a model that only estimates the intercept directly, we employed the Akaike Information Criterion (AIC; Akaike, 1974) and the Bayesian Information Criterion (BIC; Schwarz, 1978). These criteria balance model fit against the number of model parameters, with lower values indicating a better fit. In the case of multiple imputation, we calculated the average AIC and BIC across all model solutions.

Results: RQ1

We will start with RQ1: whether being alone had an effect on the network of participant 1. Participant 1 was measured for 121 days, resulting in a total of 605 time points, of which 21% were missing. After making the data equidistant by introducing additional missing values, the time series consisted of 965 time points. She was alone 68% of the time (at 328 time points) and in company 32% of the time (at 151 time points). As is common practice, we look at the raw data first. We show here only the effect on Cheerfulness; the results for Down can be found on the OSF page in the file RQ1.html. Figure 6 displays the time series of the raw data of participant 1. Although not a consistent pattern, when she is in company, she often reports feeling more cheerful than on average (indicated by the blue circles often being above 0) compared to when she is alone. In the boxplot depicted in Figure 7, the pattern becomes evident, with being Alone associated with lower levels of Cheerfulness compared to being in company.

Figure 6

The time series of participant 1 for the variable Cheerful, with the moderator Alone. The blue circles indicate time points when she was in company and the black circles indicate when she was alone.


Figure 7

A boxplot of the raw data indicating the effect of being Alone on Cheerfulness. The red dot is the mean as calculated with the fixed moderated time series model.


While such descriptive statistics are an important initial step, they cannot provide us with precise information about which parameters in the network structure are affected by the moderator. Therefore, we will utilize the results of the fixed moderated time series model.

First, both the AIC and the BIC indicated that the model in which the mean was estimated directly had a slightly better fit than the model in which the intercept was estimated directly (see Table 1). For example, the AIC of the model with the directly estimated mean was lower: 2126.370 versus 2151.195. This provides some evidence that, for this specific participant, the effect of being Alone primarily impacts the means of Cheerful and Down at a given time point, rather than feeding forward into the rest of the process.

Table 1

AIC and BIC for the moderator Alone of participant 1. Alone (mean) refers to a model where the mean is directly estimated using the measurement equation, Alone (intercept) refers to a model where the intercept is estimated directly.


Furthermore, as indicated in Table 2, being Alone is associated with lower Cheerfulness (see \(\beta_{\mu,1}\)) and higher levels of feeling Down (see \(\beta_{\mu,2}\)). Concerning the temporal structure, the autoregressive effects (\(\beta_{\phi,11}\) and \(\beta_{\phi,22}\)) were not significantly affected by the moderator Alone, with confidence intervals that include zero.

Table 2

Results of the fixed moderated time series analysis for participant 1, with Alone as the moderator.


However, when the participant is Alone, the cross-lagged effect of Down predicting Cheerful at the next time point weakens (see Figure 8 and \(\beta_{\phi,12}\) in Table 2). In company, she had a negative cross-lagged effect from Down to Cheerful (i.e., \(\phi_{12}\) in Table 2): when she feels more Down at one time point, this predicts feeling less Cheerful at the next time point and, vice versa, feeling less Down at one time point predicts feeling more Cheerful at the next time point (controlling for the other effects in the network). When she is Alone, on the other hand, the cross-lagged effect significantly decreases in magnitude (\(\beta_{\phi,12}\) in Table 2), although it remains negative (see \(\phi_{12,t} = -0.1\) in Figure 8). Thus, when she is Alone, feeling Down has less influence on feeling Cheerful at the next time point.[6]
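To make the link between the estimates in Table 2 and the network in Figure 8 explicit, the time-varying cross-lagged effect follows the linear moderation form of the model (a sketch of the relation implied by Equation 27; the exact notation in that equation may differ):

\[
\phi_{12,t} = \phi_{12} + \beta_{\phi,12}\,\text{Alone}_t,
\]

so the effect equals \(\phi_{12}\) when she is in company (\(\text{Alone}_t = 0\)) and \(\phi_{12} + \beta_{\phi,12} = -0.1\) when she is alone (\(\text{Alone}_t = 1\)).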

Regarding the innovation network structure, being Alone only has an effect on the innovation variance of Down (see \(\beta_{\sigma^2,22}\) in Table 2), leading to an increase in this innovation variance. This indicates that, when she is Alone, either (1) there are more unobserved factors that are not being taken into account when predicting feeling Down and/or (2) her emotional reactivity to these unobserved factors is stronger.

Figure 8

A psychological network illustrating the changes in temporal dynamics and average levels of Cheerful and Down when participant 1 is alone. To create this figure, we used Table 2 and Equation 27 to depict the time-varying effects conditional on the moderator Alone (also see Figure 3). We only show significant effects. From Table 2, it is evident that there is no significant effect from Cheerful to Down, as both \(\phi_{21}\) and \(\beta_{\phi,21}\) have confidence intervals that encompass zero. Furthermore, the changes in the autoregressive effects, denoted as \(\beta_{\phi,11}\) and \(\beta_{\phi,22}\), have confidence intervals including zero, meaning that these effects remain constant over time. It is important to note that even though the time-varying mean of Down is specified as \(\mu_{t,2}=0.03\), the variables are standardized and the scale ranges from -3 to 3. In this context, a value of 0.03 indicates a higher level than -0.23. Therefore, when the participant is alone, she tends to experience higher levels of Down.


Results: RQ2

For our second research question, we turn to the continuous moderator Sleep quality, to determine whether it had a moderating effect on the emotion dynamics network of participant 2. Participant 2 was measured for 123 days, resulting in a total of 615 time points, with 14% missing data. After making the data equidistant by introducing additional missing values, the time series consists of 981 time points. Figure 9 displays the time series of participant 2, focusing on the impact of Sleep quality on Cheerfulness. The time series in Figure 9 shows no obvious pattern between sleeping well and feeling more cheerful. In other words, the colour of the dots does not seem to be related to their position on the y-axis (i.e., Cheerful). For instance, both deep purple and blue-colored time points (the latter indicating that the participant has slept well) are observed when she is Cheerful (with values of zero and above) and when she is not Cheerful (with negative values for Cheerfulness). Indeed, Pearson’s correlation coefficient indicates only a small, albeit significant, effect of 0.15, with a 95% confidence interval of (0.06, 0.23).
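The raw-data correlation reported above can be obtained directly in R; in the sketch below, dat and its column names are placeholders for the empirical data.

```r
# Pearson correlation between Sleep quality and Cheerful, with its 95% CI;
# cor.test drops incomplete pairs internally.
ct <- cor.test(dat$Sleep, dat$Cheerful)
ct$estimate   # Pearson's r
ct$conf.int   # 95% confidence interval
```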

Figure 9

The time series of Participant 2 for the variable Cheerful together with the moderator Sleep Quality. The gradient color from red to blue indicates the participant’s sleep quality at different time points, with 2 (blue) representing high sleep quality and -2 (red) representing low sleep quality.


For this model, the difference in AIC and BIC between the two specifications is negligible (see Table 3). Given that the difference between the mean and intercept models is so small, we consider it equally likely that the moderator Sleep quality affects the mean at one point in time only as that its effect on the mean feeds forward into the temporal dynamics and, consequently, the rest of the process over time. For simplicity, we focus on the model in which the mean is estimated directly.

Table 3

AIC and BIC for the moderator Sleep quality of participant 2. Sleep (mean) refers to a model where the mean is directly estimated using the measurement equation, Sleep (intercept) refers to a model where the intercept is estimated directly.


The results in Table 4 for this model indicate that Sleep quality does not lead to a change in the temporal dynamic network structure (i.e., \(\beta_{\phi,11}\), \(\beta_{\phi,12}\), \(\beta_{\phi,21}\), and \(\beta_{\phi,22}\) all encompass zero in their confidence intervals; see Table 4). Thus, none of the autoregressive and cross-lagged effects are influenced by sleep for this participant. Sleep quality does, however, change the mean levels of Cheerfulness and Down. On average, an increase of one unit (i.e., one standard deviation) in Sleep quality is associated with a small increase of 0.12 in Cheerfulness and a small decrease of 0.15 in feeling Down at the same time point (see Table 4).

In contrast to participant 1, the entire innovation network of participant 2 is influenced by Sleep quality (see \(\beta_{\sigma^2,11}\), \(\beta_{\sigma^2,21}\), and \(\beta_{\sigma^2,22}\) in Table 4). In Figure 10, it can be seen that the innovation variances become smaller when Sleep quality improves. Thus, when predicting Cheerful and Down, there is less noise when participant 2 has slept well, possibly due to fewer unobserved factors and/or reduced emotional reactivity to these unobserved factors. Furthermore, the covariance (and correlation) of the innovations also becomes less pronounced (closer to zero) when her Sleep quality improves (for the exact correlation values, see the section Calculating the correlations instead of covariances in the file RQ2.html).[7] The covariance of the innovations reflects unobserved factors, such as bad weather, that influence both feelings of Cheerful and Down. This can thus be interpreted as there being fewer common unobserved factors and/or their effect decreasing when the participant sleeps better.
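For reference, the innovation correlation mentioned above follows from the moderated innovation (co)variances in the usual way (the notation is a sketch consistent with Table 4):

\[
\rho_{21,t} = \frac{\sigma_{21,t}}{\sqrt{\sigma^2_{11,t}\,\sigma^2_{22,t}}},
\]

so a covariance that shrinks faster than the two variances implies a correlation moving toward zero as Sleep quality improves.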

Figure 10

The effect of Sleep quality on the covariance structure of the innovation network. The circles represent the variances of the innovation and the edge represents the covariance. The moderator is represented as a blue diamond labeled Sleep. As Sleep quality is a continuous moderator, we show the networks for different values (in this case, standard deviations) of the moderator, with 0 representing no effect of the moderator. Furthermore, we indicate that these are not the only possible values by adding ‘…’ between the networks.


Table 4

Results of the fixed moderated time series analysis for participant 2, with Sleep quality as the moderator.


Discussion

In this paper, we have illustrated moderated idiographic networks using intensive longitudinal data from two patients with current depression. We showed that in these two patients, different parts of the networks, which consist of Cheerfulness and Down, change due to the moderators being Alone and Sleep quality. We observed, for example, that for participant 1, being Alone was associated with feeling more down and less cheerful than when she was in company. The relationship in the temporal dynamics network between feeling Down and feeling Cheerful at the next time point also decreased, meaning that the connection between the two became weaker when she was Alone compared to being in company. For the second participant, higher Sleep quality was associated with increased Cheerfulness and decreased feelings of Down, although these effects were small. Furthermore, Sleep quality was also associated with changes in the innovation network: factors not taken into account in the model, such as unobserved variables like bad weather, influenced feelings of Cheerfulness and Down less when she had slept well.

An important novel feature of the moderated idiographic network model approach is that it allows us to integrate context and other clinically relevant variables, which are often overlooked in existing clinical networks (Bak et al., 2016; David et al., 2018; Frumkin et al., 2020; Levinson et al., 2021; Reeves & Fisher, 2020), into the network, thereby enhancing its clinical utility. Currently, idiographic networks are data-driven and, for ease of analysis, often include only items with the same response and time scale (Bastiaansen et al., 2020). In contrast, the analytic approach introduced here enables a hypothesis-driven form of idiographic network analysis, in which context variables, such as being in the company of others or not, can be part of the network (Bringmann, in press; Os et al., 2017). We furthermore showed how items that have a different time scale, such as sleep quality, can be incorporated into a network, using an explicit hypothesis about how sleep would affect mood over the day.

Taking this hypothesis-driven approach can encourage the development of even more person-specific items tailored to the patient (Klipstein et al., 2023). For example, instead of gathering information about being in the company of colleagues or friends, details about specific persons could be collected to understand which individuals influence a patient’s mood, such as by helping regulate negative emotions (Stadel et al., 2023). Therefore, rather than solely focusing on emotions and symptoms, idiographic networks could enhance their clinical utility by incorporating personalized contextual items (Bringmann, 2021).

Furthermore, one of the advantages of using fixed moderated time series analysis is that it uses the state space framework, enabling effective handling of missing values, which are ubiquitous in ILD research (Rintala et al., 2019; Silvia et al., 2013). Additionally, the mean can be directly estimated, facilitating the straightforward inclusion of the mean level in networks. Considering that changes in the mean levels of variables, such as symptoms and mood, are almost always clinically relevant, we hope this encourages other researchers to also include mean levels in their network visualizations.[8]

We also discussed how the moderator can affect the mean of the process differently depending on whether the mean is estimated directly or indirectly via the intercepts (Ernst et al., 2023). The differentiation between these two effects is of substantial interest: does a moderator like stress persist in influencing the patient’s emotions, or does it impact the mean level of the patient’s emotions only at one time point? However, testing this with standard model selection proves difficult, given that the model differences are very subtle. In other words, the largest part of the model (variance and covariance of the innovation and the autoregressive and cross-lagged effects) remains the same. Thus, further research is needed to explore how well one can differentiate and perform model selection between these two different effects of the moderator on the mean level of the process of interest.

While the fixed moderated time series model, in contrast to other currently used VAR-based network models, makes it possible to moderate all parameters of a VAR model, this flexibility also limits the scalability of the approach. There are three important sources of model dimensionality in the fixed moderated time series model. First, as in standard VAR approaches, increasing the lag order \(p\) of the model increases the number of free parameters by \(\dim(\Phi) \cdot p\). For the fixed moderated VAR model, however, each lagged effect can also be made context dependent, further increasing the model dimensionality. Second, the number of process variables can be increased. Already in a standard VAR model, increasing the number of process variables dramatically increases the number of free parameters (e.g., Loossens et al., 2021; Revol et al., in press). This is again exacerbated in the fixed moderated model, as each of these free parameters can in principle be made subject to moderation. Third, the number of moderators can be increased. With each included moderator, one could allow the estimation of that moderator's effects on all the process parameters.

In our application, we fit the fixed moderated model with two process variables and one moderator. This model already has 18 parameters to estimate (two means, four autoregressive and cross-lagged effects, and three innovation (co)variances, each with its own moderation effect), and even this simplest case of a moderated idiographic network (using only lag 1) led to convergence issues. For example, convergence issues were encountered when estimating the effect of Sleep quality on a reduced data set. Additionally, when attempting to estimate the influence of more than two moderator variables, the model also failed to converge.

The more time points a researcher has available, the less prone to convergence issues the model will typically be (keeping the overall model complexity constant). However, obtaining the approximately 500 time points in the current study is already burdensome for participants, making it unlikely that many more time points can be collected. Furthermore, other factors that lie less within the researchers' control may also influence convergence. For instance, Schuurman et al. (2015) pointed out that in the autoregressive model with measurement error, a nonzero autocorrelation is necessary for identification, with higher autocorrelation making empirical identification easier and thus reducing the likelihood of convergence problems (see also Adolf & Ceulemans, 2023). Similar issues could play a role in the fixed moderated time series model, making it important that such factors, besides model dimensionality and the required number of time points, are studied further with simulations.

It is clear from the above that careful thought should be given to keeping the dimensionality of the model as small as possible, while still being able to investigate meaningful relationships between context and the dynamics of affect variables. One option is to assess contextual dependence in only a subset of the fixed moderated VAR parameters and for specific processes of interest; see the sketch below. For instance, a researcher might want to assess how the cross-lagged relationship from Cheerfulness to Down depends on social context, and might therefore only allow elements of the \(\Phi\) matrix to be moderated. While one risks not capturing the full dynamical profile of change, it will be far easier to estimate those effects that are of prime interest to the researcher. Another option for reducing dimensionality involves utilizing data-driven techniques, such as regularization methods (e.g., Epskamp et al., 2018b), which can be used to shrink the model's dimensionality and thus mitigate convergence issues.
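As a rough illustration of the first option, the sketch below fixes the moderation parameters that are not of interest to zero before estimation. The object moderatedModel and the parameter labels are assumptions for illustration, not the labels used by fmTSA.

```r
library(OpenMx)

# Keep only the moderation of the cross-lagged effect of interest (here phi21,
# Cheerful -> Down); fix the other lagged moderation effects to zero. The
# moderation effects on the means and innovation (co)variances could be fixed
# in the same way.
reducedModel <- omxSetParameters(moderatedModel,
                                 labels = c("beta_phi11", "beta_phi12", "beta_phi22"),
                                 free = FALSE, values = 0)
reducedFit <- mxTryHard(reducedModel)
```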

An additional limitation of the fixed moderated time series analysis is that it involves more preprocessing steps than a linear regression approach, particularly the imputation of the moderator and the provision of initial values. While we have shown how this can be done, it requires careful consideration and familiarity with the model and data. In our empirical example, for instance, we found that the results were sensitive to differences in imputation. Therefore, we recommend relying on theory and qualitative information when selecting an appropriate imputation method. For instance, it would be crucial to ask the patient who provided the data whether she was more likely to be alone or not when data were missing. This implies that the model cannot be readily employed in clinical practice, especially when also considering the convergence issues, which are more severe than in other contemporary network models based on ordinary least squares, kernels, or generalized additive modeling.

Using a fixed moderated time series model also presents more general challenges that are not specifically tied to this model. One such challenge, as mentioned earlier, is measurement error. Due to the convergence issues already present, we did not explicitly model measurement error, although it is technically simple to do so in a state space framework: one can model measurement error separately from the innovations. Failing to do this for the dependent variable(s) implies that the measurement error becomes part of the innovation. Research has shown that this can lead to severely distorted autoregressive and cross-lagged estimates (Schuurman & Hamaker, 2019; Schuurman et al., 2015). Additionally, it is possible that the moderators themselves are affected by measurement error (Adolf et al., 2017). In the fixed moderated time series model, we assume that the moderator is measured without error. This assumption implies that any measurement error in the moderator may bias the model estimates. Recognizing the probable presence of noise in the measurement of moderators, it is important to study how such measurement error behaves, for instance, whether it is continuous or only occasional in the process. Extending the model to explicitly account for measurement error in the moderator could be another potential solution (for more information, see Ernst et al., 2024).
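In the state space sketch given earlier, separating measurement error from the innovations amounts to freeing the measurement-error covariance matrix; this only illustrates the general principle, not how fmTSA would handle it.

```r
# Free the R matrix to estimate a measurement-error variance per observed variable,
# so that measurement noise is no longer absorbed by the innovations in Q.
ssModelME <- mxModel(ssModel,
                     mxMatrix("Diag", 2, 2, free = TRUE, values = 0.2,
                              labels = c("measErr1", "measErr2"), name = "R"))
fitME <- mxTryHard(ssModelME)
```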

Another more general challenge pertains to what we include as moderators. In this paper, we did not explicitly distinguish between exogenous and endogenous variables (Arizmendi et al., 2021; Bringmann et al., 2022). While the weather is a typical example of an exogenous variable – we cannot influence the weather, but the weather can influence our mood – the process of sleeping well is more likely to be reciprocal and may be considered as an endogenous variable (Konjarski et al., 2018). In this case, perhaps we should not only consider sleep as a moderator of the emotion process but also explore the effect of emotions as a moderator of sleep quality. This topic requires further research, encompassing both theory and how this can be best implemented in the fixed moderated time series model.

In sum, this paper illustrates a new way of analyzing change in idiographic psychological networks using a more theory-driven approach, which not only looks at which parameters in a network change over time, but also offers tools to identify (context) factors associated with the change. We have shown that the use of fixed moderated time series analysis brought advantages in handling missing values and allowed for the direct estimation of mean levels, enhancing the clinical relevance of network visualizations. However, we have also emphasized several challenges, such as convergence issues and dealing with measurement error, urging the need for further research and exploration of alternative modeling techniques. We hope that the use of moderators in idiographic network models will help to give due importance to factors such as change, context, and hypothesis-driven investigation, and thereby play a role in increasing the clinical utility of network models in future research.

Acknowledgements

We are very grateful to Janne Adolf for thinking along on how to best execute the fixed moderated time series model, for helping with technical issues regarding modeling, and for providing Figure 5. We also thank Noemi Schuurman for helping with the notation in the Appendix and with the interpretation of the direct and indirect estimation of the mean. We are also grateful to Yong Zhang for providing very useful feedback on parts of the manuscript and figure code, to Sebastian Castro Alvarez for providing the code to make the data equidistant, and finally to Markus Eronen for giving feedback on several drafts of the manuscript. We furthermore thank Marieke Helmich for collecting the empirical data. The research presented in this article was supported by a grant of the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC-CoG-2015; No 681466) awarded to Marieke Wichers. The research of Eva Ceulemans was supported by research grants from the Fund for Scientific Research-Flanders (FWO; Project No. G0C9821N) and the Research Council of KU Leuven (C14/23/062; iBOF/21/090). The research of Sigert Ariens was supported by the Research Council of KU Leuven (PDMT2/23/018). The research of Laura Bringmann was supported by the Dutch Research Council Veni Grant (NWO-Veni 191G.037) and by the ‘Stress in Action’ project (www.stress-in-action.nl), which is financially supported by the Dutch Research Council and the Dutch Ministry of Education, Culture and Science (NWO gravitation grant number 024.005.010).

Endnotes

[1] Note that there is a difference between changes and fluctuations (Bringmann et al., 2017). A standard VAR model cannot capture changes, but only fluctuations: For example, if a fluctuation of sleep in the positive direction happens after a positive fluctuation of mood, this is represented by a positive edge from sleep to mood.

[2] Also known as crossregressive effects.

[3] There are other methods available for estimating fixed moderated time series analyses, such as models using a Bayesian framework.

[4] We also attempted to include two or more moderators, using dummy coding and multiple categories from this list. However, with more than two moderators, the model did not converge anymore.

[5] Alternatively, we explored a different hypothesis, specifically, the idea that sleep exclusively impacts mood in the morning and not during the rest of the day. Normally this could be achieved by having missing values for sleep during the rest of the day, but this was not possible in the fixed moderated time series model as it does not allow missing values in the moderator variable. Therefore, we retained only the data from the first beep of the day, discarding all other data. This resulted in a reduction of the dataset by approximately 80%. This model did not converge, probably due to the large data reduction. Non-convergence was almost always due to inadequate starting values.

[6] As can be seen on the OSF page in the file RQ1.html (section Sensitivity check), the effect of the moderator on βφ,12 is no longer significant in the sensitivity analysis. In clinical practice, it would therefore be important to ask the participant whether they were likely to be alone or not alone when there are missing values.

[7] We also calculated the correlation to make sure that the changes in the covariance were not merely due to changes in the variances.

[8] It is worth noting that, strictly speaking, within an ordinary least squares model, direct estimation of the mean is achievable through two-level equation estimation (for instance Ernst et al., 2020).

[9] We follow the formulation and equations of Jongerling et al. (2015), Schuurman et al. (2015), and Song and Ferrer (2009).

[10] Note that also other lags can be incorporated into this term, such as a lag-2 effect. However, for simplicity, we only focus on the lag-1 case (see Song and Ferrer, 2009).

References

Adolf, J., & Ceulemans, E. (2023). Improved estimation of autoregressive models through contextual impulses and robust modeling. https://doi.org/10.31234/osf.io/3qgse

Adolf, J. K., Voelkle, M. C., Brose, A., & Schmiedek, F. (2017). Capturing Context-Related Change in Emotional Dynamics via Fixed Moderated Time Series Analysis. Multivariate Behavioral Research, 52(4), 499–531. https://doi.org/10.1080/00273171.2017.1321978

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723.

Albers, C. J., & Bringmann, L. F. (2020). Inspecting Gradual and Abrupt Changes in Emotion Dynamics With the Time-Varying Change Point Autoregressive Model. European Journal of Psychological Assessment, 36(3), 492–499. https://doi.org/10.1027/1015-5759/a000589

Ariens, S., Adolf, J. K., & Ceulemans, E. (2022). Collinearity Issues in Autoregressive Models with Time-Varying Serially Dependent Covariates. Multivariate Behavioral Research, 58(4), 687–705. https://doi.org/10.1080/00273171.2022.2095247

Ariens, S., Ceulemans, E., & Adolf, J. K. (2020). Time series analysis of intensive longitudinal data in psychosomatic research: A methodological overview. Journal of Psychosomatic Research, 137, 110191. https://doi.org/10.1016/j.jpsychores.2020.110191

Arizmendi, C., Gates, K., Fredrickson, B., & Wright, A. (2021). Specifying exogeneity and bilinear effects in data driven model searches. Behavior research methods, 53(3), 1276–1288. https://doi.org/10.3758/s13428-020-01469-2

Auger-Méthé, M., Newman, K., Cole, D., Empacher, F., Gryba, R., King, A. A., Leos-Barajas, V., Mills Flemming, J., Nielsen, A., Petris, G., & Thomas, L. (2021). A guide to state-space modeling of ecological time series. Ecological Monographs, 91(4), e01470. https://doi.org/10.1002/ecm.1470

Bak, M., Drukker, M., Hasmi, L., & Os, J. van. (2016). An n=1 Clinical Network Analysis of Symptoms and Treatment in Psychosis. PLoS ONE, 11(9), e0162811. https://doi.org/10.1371/journal.pone.0162811

Bartels, S. L., Van Zelst, C., Melo Moura, B., Daniëls, N. E., Simons, C. J., Marcelis, M., Bos, F. M., & Servaas, M. N. (2023). Feedback based on experience sampling data: Examples of current approaches and considerations for future research. Heliyon, 9(9), e20084. https://doi.org/10.1016/j.heliyon.2023.e20084

Bastiaansen, J. A., Kunkels, Y. K., Blaauw, F. J., Boker, S. M., Ceulemans, E., Chen, M., Chow, S.-M., Jonge, P. de, Emerencia, A. C., Epskamp, S., Fisher, A. J., Hamaker, E. L., Kuppens, P., Lutz, W., Meyer, M. J., Moulder, R., Oravecz, Z., Riese, H., Rubel, J., … Bringmann, L. F. (2020). Time to get personal? The impact of researchers’ choices on the selection of treatment targets using the experience sampling methodology. Journal of Psychosomatic Research, 137, 110211. https://doi.org/10.1016/j.jpsychores.2020.110211

Bastiaansen, J. A., Meurs, M., Stelwagen, R., Wunderink, L., Schoevers, R. A., Wichers, M., & Oldehinkel, A. J. (2018). Self-monitoring and personalized feedback based on the experiencing sampling method as a tool to boost depression treatment: A protocol of a pragmatic randomized controlled trial (ZELF-i). BMC Psychiatry, 18(1), 276. https://doi.org/10.1186/s12888-018-1847-z

Beltz, A. M., & Gates, K. M. (2017). Network Mapping with GIMME. Multivariate behavioral research, 52(6), 789–804. https://doi.org/10.1080/00273171.2017.1373014

Blanchard, M. A., & Heeren, A. (2022). Ongoing and Future Challenges of the Network Approach to Psychopathology: From Theoretical Conjectures to Clinical Translations. In Comprehensive Clinical Psychology – 2nd edition (pp. 32–46). Amsterdam: Elsevier. https://doi.org/10.1016/B978-0-12-818697-8.00044-3

Bolger, N., & Laurenceau, J.-P. (2013). Intensive longitudinal methods: An introduction to diary and experience sampling research. New York, NY: Guilford Press.

Borsboom, D. (2017a). Mental disorders, network models, and dynamical systems. In Philosophical Issues in Psychiatry IV: Psychiatric Nosology DSM-5 (International Perspectives in Philosophy and Psychiatry) (pp. 80–97). Publisher: Oxford University Press New York.

Borsboom, D. (2017b). A network theory of mental disorders. World Psychiatry, 16(1), 5–13. https://doi.org/10.1002/wps.20375

Borsboom, D., & Cramer, A. O. (2013). Network Analysis: An Integrative Approach to the Structure of Psychopathology. Annual Review of Clinical Psychology, 9(1), 91–121. https://doi.org/10.1146/annurev-clinpsy-050212-185608

Borsboom, D., Deserno, M. K., Rhemtulla, M., Epskamp, S., Fried, E. I., McNally, R. J., Robinaugh, D. J., Perugini, M., Dalege, J., Costantini, G., Isvoranu, A.-M., Wysocki, A. C., Borkulo, C. D. van, Bork, R. van, & Waldorp, L. J. (2021). Network analysis of multivariate data in psychological science. Nature Reviews Methods Primers, 1, 58. https://doi.org/10.1038/s43586-021-00055-w

Bos, F. M., Blaauw, F. J., Snippe, E., Krieke, L. van der, Jonge, P. de, & Wichers, M. (2018). Exploring the emotional dynamics of subclinically depressed individuals with and without anhedonia: An experience sampling study. Journal of Affective Disorders, 228, 186–193. https://doi.org/10.1016/j.jad.2017.12.017

Bos, F. M., Snippe, E., Bruggeman, R., Doornbos, B., Wichers, M., & Krieke, L. van der. (2020). Recommendations for the use of long-term experience sampling in bipolar disorder care: A qualitative study of patient and clinician experiences. International Journal of Bipolar Disorders, 8(1), 38. https://doi.org/10.1186/s40345-020-00201-5

Brandt, P. T., & Williams, J. T. (2007). Multiple Time Series Models. Series: Quantitative Applications in the Social Sciences. Sage.

Bringmann, L. F., Vissers, N., Wichers, M., Geschwind, N., Kuppens, P., Peeters, F., Borsboom, D., & Tuerlinckx, F. (2013). A network approach to psychopathology: New insights into clinical longitudinal data. PLoS ONE, 8(4), e60188.

Bringmann, L. F. (2021). Person-specific networks in psychopathology: Past, present, and future. Current Opinion in Psychology, 41, 59–64. https://doi.org/10.1016/j.copsyc.2021.03.004

Bringmann, L. F. (in press). The future of dynamic networks in clinical practice. World Psychiatry.

Bringmann, L. F., Albers, C., Bockting, C., Borsboom, D., Ceulemans, E., Cramer, A., Epskamp, S., Eronen, M. I., Hamaker, E., Kuppens, P., Lutz, W., McNally, R. J., Molenaar, P., Tio, P., Voelkle, M. C., & Wichers, M. (2022). Psychopathological networks: Theory, methods and practice. Behaviour Research and Therapy, 149, 104011. https://doi.org/10.1016/j.brat.2021.104011

Bringmann, L. F., & Eronen, M. I. (2018). Don’t Blame the model: Reconsidering the network approach to psychopathology. Psychological Review, 125(4), 606–615. https://doi.org/10.1037/rev0000108

Bringmann, L. F., Ferrer, E., Hamaker, E. L., Borsboom, D., & Tuerlinckx, F. (2018). Modeling Nonstationary Emotion Dynamics in Dyads using a Time-Varying Vector-Autoregressive Model. Multivariate Behavioral Research, 53(3), 293–314. https://doi.org/10.1080/00273171.2018.1439722

Bringmann, L. F., Hamaker, E. L., Vigo, D. E., Aubert, A., Borsboom, D., & Tuerlinckx, F. (2017). Changing dynamics: Time-varying autoregressive models using generalized additive modeling. Psychological Methods, 22(3), 409–425. https://doi.org/10.1037/met0000085

Bringmann, L. F., Veen, D. C. van der, Wichers, M., Riese, H., & Stulp, G. (2021). ESMvis: A tool for visualizing individual Experience Sampling Method (ESM) data. Quality of Life Research, 30(11), 3179–3188. https://doi.org/10.1007/s11136-020-02701-4

Bulteel, K., Tuerlinckx, F., Brose, A., & Ceulemans, E. (2016). Using Raw VAR Regression Coefficients to Build Networks can be Misleading. Multivariate Behavioral Research, 51(2-3), 330–344. https://doi.org/10.1080/00273171.2016.1150151

Bulteel, K., Tuerlinckx, F., Brose, A., & Ceulemans, E. (2018). Improved Insight into and Prediction of Network Dynamics by Combining VAR and Dimension Reduction. Multivariate Behavioral Research, 53(6), 853–875. https://doi.org/10.1080/00273171.2018.1516540

Burger, J., Epskamp, S., Veen, D. C. van der, Dablander, F., Schoevers, R. A., Fried, E. I., & Riese, H. (2022a). A clinical PREMISE for personalized models: Toward a formal integration of case formulations and statistical networks. Journal of Psychopathology and Clinical Science, 131(8), 906–916. https://doi.org/10.1037/abn0000779

Burger, J., Ralph-Nearman, C., & Levinson, C. A. (2022b). Integrating clinician and patient case conceptualization with momentary assessment data to construct idiographic networks: Moving toward personalized treatment for eating disorders. Behaviour Research and Therapy, 159, 104221. https://doi.org/10.1016/j.brat.2022.104221

Buuren, S. v., & Groothuis-Oudshoorn, K. (2011). Mice: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45, 1–67. https://doi.org/10.18637/jss.v045.i03

Cabrieto, J., Adolf, J., Tuerlinckx, F., Kuppens, P., & Ceulemans, E. (2019). An Objective, Comprehensive and Flexible Statistical Framework for Detecting Early Warning Signs of Mental Health Problems. Psychotherapy and Psychosomatics, 88(3), 184–186. https://doi.org/10.1159/000494356

Castro-Alvarez, S., Tendeiro, J. N., Jonge, P. de, Meijer, R. R., & Bringmann, L. F. (2022). Mixed-Effects Trait-State-Occasion Model: Studying the Psychometric Properties and the Person-Situation Interactions of Psychological Dynamics. Structural Equation Modeling: A Multidisciplinary Journal, 29(3), 438–451. https://doi.org/10.1080/10705511.2021.1961587

Chatfield, C. (2003). The analysis of time series: An introduction. Boca Raton, FL: Chapman; Hall/CRC.

Chen, M., Chow, S.-M., Hammal, Z., Messinger, D. S., & Cohn, J. F. (2021). A Person and Time-Varying Vector Autoregressive Model to Capture Interactive Infant-Mother Head Movement Dynamics. Multivariate Behavioral Research, 56(5), 739–767. https://doi.org/10.1080/00273171.2020.1762065

Chen, Y. W., Bundy, A., Cordier, R., Chien, Y. L., & Einfeld, S. (2016). The Experience of Social Participation in Everyday Contexts Among Individuals with Autism Spectrum Disorders: An Experience Sampling Study. Journal of Autism and Developmental Disorders, 46(4), 1403–1414. https://doi.org/10.1007/s10803-015-2682-4

Chow, S.-M. (2019). Practical Tools and Guidelines for Exploring and Fitting Linear and Nonlinear Dynamical Systems Models. Multivariate Behavioral Research, 54(5), 690–718. https://doi.org/10.1080/00273171.2019.1566050

Chow, S.-M., Ho, M.-h. R., Hamaker, E. L., & Dolan, C. V. (2010). Equivalence and Differences Between Structural Equation Modeling and State-Space Modeling Techniques. Structural Equation Modeling: A Multidisciplinary Journal, 17(2), 303–332. https://doi.org/10.1080/10705511003661553

Chua, A. S., & Tripodis, Y. (2022). A state-space approach for longitudinal outcomes: An application to neuropsychological outcomes. Statistical Methods in Medical Research, 31(3), 520–533. https://doi.org/10.1177/09622802211055858

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum Associates.

Cramer, A. O. J., & Borsboom, D. (2015). Problems attract problems: A network perspective on mental disorders. In R. A. Scott & S. M. Kosslyn (Eds.), Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource (pp. 1–15). John Wiley & Sons.

Cramer, A. O., Waldorp, L. J., Van Der Maas, H. L., & Borsboom, D. (2010). Comorbidity: A network perspective. Behavioral and Brain Sciences, 33(2-3), 137–150. https://doi.org/10.1017/S0140525X09991567

David, S. J., Marshall, A. J., Evanovich, E. K., & Mumma, G. H. (2018). Intraindividual Dynamic Network Analysis Implications for Clinical Assessment. Journal of Psychopathology and Behavioral Assessment, 40(2), 235–248. https://doi.org/10.1007/s10862-017-9632-8

Dawson, J. F. (2014). Moderation in Management Research: What, Why, When, and How. Journal of Business and Psychology, 29(1), 1–19. https://doi.org/10.1007/s10869-013-9308-7

De Haan-Rietdijk, S., Gottman, J. M., Bergeman, C. S., & Hamaker, E. L. (2016). Get Over It! A Multilevel Threshold Autoregressive Model for State-Dependent Affect Regulation. Psychometrika, 81(1). https://doi.org/10.1007/s11336-014-9417-x

Dejonckheere, E., Houben, M., Schat, E., Ceulemans, E., & Kuppens, P. (2021). The short-term psychological impact of the COVID-19 pandemic in psychiatric patients: Evidence for differential emotion and symptom trajectories in Belgium. Psychologica Belgica, 61(1), 163–172. https://doi.org/10.5334/pb.1028

Delespaul, P. a. E. G., & Devries, M. W. (1987). The Daily Life of Ambulatory Chronic Mental Patients. The Journal of Nervous and Mental Disease, 175(9), 537. https://doi.org/10.1097/00005053-198709000-00005

Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods (2nd ed). Oxford University Press.

Epskamp, S., Borkulo, C. D. van, Veen, D. C. van der, Servaas, M. N., Isvoranu, A. M., Riese, H., & Cramer, A. O. (2018a). Personalized Network Modeling in Psychopathology: The Importance of Contemporaneous and Temporal Connections. Clinical Psychological Science, 6(3), 416–427. https://doi.org/10.1177/2167702617744325

Epskamp, S., Cramer, A. O. J., Waldorp, L. J., Schmittmann, V. D., & Borsboom, D. (2012). Qgraph: Network visualizations of relationships in psychometric data. Journal of Statistical Software, 48(4), 1–18. https://doi.org/10.18637/jss.v048.i04

Epskamp, S., Waldorp, L. J., Mõttus, R., & Borsboom, D. (2018b). The Gaussian Graphical Model in Cross-Sectional and Time-Series Data. Multivariate Behavioral Research, 53(4), 453–480. https://doi.org/10.1080/00273171.2018.1454823

Ernst, A. F., Albers, C. J., Jeronimus, B. F., & Timmerman, M. E. (2020). Inter-Individual Differences in Multivariate Time-Series: Latent Class Vector-Autoregressive Modeling. European Journal of Psychological Assessment, 36(3), 482–491. https://doi.org/10.1027/1015-5759/a000578

Ernst, A. F., Albers, C. J., & Timmerman, M. E. (2023). A comprehensive model framework for between-individual differences in longitudinal data. Psychological Methods. https://doi.org/10.1037/met0000585

Ernst, A. F., Ceulemans, E., Bringmann, L. F., & Adolf, J. K. (2024). Evaluating contextual models for intensive longitudinal data in the presence of noise. https://osf.io/yxg72

Fischer, A. H., & Van Kleef, G. A. (2010). Where have all the people gone? A plea for including social interaction in emotion research. Emotion Review, 2(3), 208–211. https://doi.org/10.1177/1754073910361980

Fried, E. I. (2020). Lack of Theory Building and Testing Impedes Progress in The Factor and Network Literature. Psychological Inquiry, 31(4), 271–288. https://doi.org/10.1080/1047840X.2020.1853461

Frumkin, M. R., Piccirillo, M. L., Beck, E. D., Grossman, J. T., & Rodebaugh, T. L. (2020). Feasibility and utility of idiographic models in the clinic: A pilot study. Psychotherapy Research, 31(4), 520–534. https://doi.org/10.1080/10503307.2020.1805133

Groen, R. N., Ryan, O., Wigman, J. T. W., Riese, H., Penninx, B. W. J. H., Giltay, E. J., Wichers, M., & Hartman, C. A. (2020). Comorbidity between depression and anxiety: Assessing the role of bridge mental states in dynamic psychological networks. BMC Medicine, 18(1), 308. https://doi.org/10.1186/s12916-020-01738-z

Grund, S., Robitzsch, A., & Luedtke, O. (2023). Mitml: Tools for Multiple Imputation in Multilevel Modeling. https://CRAN.R-project.org/package=mitml

Gu, F., Preacher, K. J., & Ferrer, E. (2014). A State Space Modeling Approach to Mediation Analysis. Journal of Educational and Behavioral Statistics, 39(2), 117–143. https://doi.org/10.3102/1076998614524823

Haan-Rietdijk, S. de, Voelkle, M. C., Keijsers, L., & Hamaker, E. L. (2017). Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data. Frontiers in Psychology, 8. https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01849

Hamaker, E. L., Asparouhov, T., Brose, A., Schmiedek, F., & Muthén, B. (2018). At the Frontiers of Modeling Intensive Longitudinal Data: Dynamic Structural Equation Models for the Affective Measurements from the COGITO Study. Multivariate Behavioral Research, 53(6), 820–841. https://doi.org/10.1080/00273171.2018.1446819

Hamaker, E. L., & Grasman, R. P. P. P. (2012). Regime Switching State-Space Models Applied to Psychological Processes: Handling Missing Data and Making Inferences. Psychometrika, 77(2), 400–422. https://doi.org/10.1007/s11336-012-9254-8

Hamaker, E. L., & Dolan, C. V. (2009). Idiographic Data Analysis: Quantitative Methods – From Simple to Advanced. In J. Valsiner, P. C. M. Molenaar, M. Lyra, & N. Chaudhary (Eds.), Dynamic Process Methodology in the Social and Developmental Sciences (pp. 191–216). New York, NY: Springer-Verlag.

Hamilton, J. D. (1994). Time series analysis. Princeton, NJ: Princeton university press.

Harvey, A. C. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.

Haslbeck, J., Ryan, O., Robinaugh, D., Waldorp, L., & Borsboom, D. (2021a). Modeling Psychopathology: From Data Models to Formal Theories (tech. rep.). https://doi.org/10.31234/osf.io/jgm7f

Haslbeck, J. M. B. (2022). Estimating group differences in network models using moderation analysis. Behavior Research Methods, 54(1), 522–540. https://doi.org/10.3758/s13428-021-01637-y

Haslbeck, J. M. B., Borsboom, D., & Waldorp, L. J. (2021b). Moderated Network Models. Multivariate Behavioral Research, 56(2), 256–287. https://doi.org/10.1080/00273171.2019.1677207

Haslbeck, J. M. B., Bringmann, L. F., & Waldorp, L. J. (2020). A Tutorial on Estimating Time-Varying Vector Autoregressive Models. Multivariate Behavioral Research, 56(1), 120–149. https://doi.org/10.1080/00273171.2020.1743630

Haslbeck, J. M., Ryan, O., Maas, H. L. van der, & Waldorp, L. J. (2022). Modeling Change in Networks. In Network Psychometrics with R (pp. 193–209). Routledge.

Hawkley, L. C., & Cacioppo, J. T. (2010). Loneliness Matters: A Theoretical and Empirical Review of Consequences and Mechanisms. Annals of Behavioral Medicine, 40(2), 218–227. https://doi.org/10.1007/s12160-010-9210-8

Hayes, A. M., Laurenceau, J.-P., Feldman, G., Strauss, J. L., & Cardaciotto, L. (2007). Change is not always linear: The study of nonlinear and discontinuous patterns of change in psychotherapy. Clinical Psychology Review, 27(6), 715–723. https://doi.org/10.1016/j.cpr.2007.01.008

Hayes, A. F., & Montoya, A. K. (2017). A Tutorial on Testing, Visualizing, and Probing an Interaction Involving a Multicategorical Variable in Linear Regression Analysis. Communication Methods and Measures, 11(1), 1–30. https://doi.org/10.1080/19312458.2016.1271116

Helmich, M. A., Smit, A. C., Bringmann, L. F., Schreuder, M. J., Oldehinkel, A. J., Wichers, M., & Snippe, E. (2023). Detecting Impending Symptom Transitions Using Early-Warning Signals in Individuals Receiving Treatment for Depression. Clinical Psychological Science, 11(6), 994–1010. https://doi.org/10.1177/21677026221137006

Jongerling, J., Laurenceau, J.-P., & Hamaker, E. L. (2015). A Multilevel AR(1) Model: Allowing for Inter-Individual Differences in Trait-Scores, Inertia, and Innovation Variance. Multivariate Behavioral Research, 50(3), 334–349. https://doi.org/10.1080/00273171.2014.1003772

Kalman, R. E. (1960). A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82(1), 35–45. https://doi.org/10.1115/1.3662552

Ketkar, N. (2017). Deep Learning with Python. Apress. https://doi.org/10.1007/978-1-4842-2766-4

Klipstein, L. von, Riese, H., Veen, D. C. van der, Servaas, M. N., & Schoevers, R. A. (2020). Using person-specific networks in psychotherapy: Challenges, limitations, and how we could use them anyway. BMC Medicine, 18(1), 1–8. https://doi.org/10.1186/s12916-020-01818-0

Klipstein, L. von, Servaas, M. N., Schoevers, R. A., Veen, D. C. van der, & Riese, H. (2023). Integrating personalized experience sampling in psychotherapy: A case illustration of the Therap-i module. Heliyon, 9(3), e14507. https://doi.org/10.1016/j.heliyon.2023.e14507

Konjarski, M., Murray, G., Lee, V. V., & Jackson, M. L. (2018). Reciprocal relationships between daily sleep and mood: A systematic review of naturalistic prospective studies. Sleep Medicine Reviews, 42, 47–58. https://doi.org/10.1016/j.smrv.2018.05.005

Koval, P., Burnett, P. T., & Zheng, Y. (2021). Emotional inertia: On the conservation of emotional momentum. In C. E. Waugh & P. Kuppens (eds.) Affect dynamics (pp. 63–94). Springer Nature Switzerland AG.

Kramer, I., Simons, C. J., Hartmann, J. A., Menne-Lothmann, C., Viechtbauer, W., Peeters, F., Schruers, K., Van Bemmel, A. L., Inez, M., Delespaul, P., Van Os, J., & Wichers, M. (2014). A therapeutic application of the experience sampling method in the treatment of depression: A randomized controlled trial. World Psychiatry, 13(1), 68–77. https://doi.org/10.1002/wps.20090

Kuppens, P., Allen, N. B., & Sheeber, L. B. (2010). Emotional inertia and psychological maladjustment. Psychological Science, 21(7), 984–991. https://doi.org/10.1177/0956797610372634

Kuppens, P., Dejonckheere, E., Kalokerinos, E. K., & Koval, P. (2022). Some Recommendations on the Use of Daily Life Methods in Affective Science. Affective Science, 3(2), 505–515. https://doi.org/10.1007/s42761-022-00101-0

Levinson, C. A., Hunt, R. A., Keshishian, A. C., Brown, M. L., Vanzhula, I., Christian, C., Brosof, L. C., & Williams, B. M. (2021). Using individual networks to identify treatment targets for eating disorder treatment: A proof-of-concept study and initial data. Journal of Eating Disorders, 9(1), 147. https://doi.org/10.1186/s40337-021-00504-7

Li, Y., Wood, J., Ji, L., Chow, S.-M., & Oravecz, Z. (2022). Fitting Multilevel Vector Autoregressive Models in Stan, JAGS, and Mplus. Structural Equation Modeling: A Multidisciplinary Journal, 29(3), 452–475. https://doi.org/10.1080/10705511.2021.1911657

Loossens, T., Dejonckheere, E., Tuerlinckx, F., & Verdonck, S. (2021). Informing VAR(1) with qualitative dynamical features improves predictive accuracy. Psychological Methods, 26(6), 635–659. https://doi.org/10.1037/met0000401

Lütkepohl, H. (2007). New introduction to multiple time series analysis. New York: Springer.

Lutz, W., Schwartz, B., Hofmann, S. G., Fisher, A. J., Husen, K., & Rubel, J. A. (2018). Using network analysis for the prediction of treatment dropout in patients with mood and anxiety disorders: A methodological proof-of-concept study. Scientific Reports, 8(1), 1–9. https://doi.org/10.1038/s41598-018-25953-0

Mansueto, A. C., Wiers, R. W., Weert, J. C. M. van, Schouten, B. C., & Epskamp, S. (2022). Investigating the feasibility of idiographic network models. Psychological Methods. https://doi.org/10.1037/MET0000466

McNeish, D., & Hamaker, E. L. (2019). A Primer on Two-Level Dynamic Structural Equation Models for Intensive Longitudinal Data in Mplus. Psychological Methods, 25(5), 610–635. https://doi.org/10.1037/met0000250

Mehl, M. R., & Conner, T. S. (Eds.). (2012). Handbook of research methods for studying daily life. New York, NY: Guilford Press.

Molenaar, P. C. M., Beltz, A. M., Gates, K. M., & Wilson, S. J. (2016). State Space Modeling of Time-Varying Contemporaneous and Lagged Relations in Connectivity Maps. NeuroImage, 125, 791–802. https://doi.org/10.1016/j.neuroimage.2015.10.088

Myin-Germeys, I., & Kuppens, P. (Eds.). (2022). The open handbook of experience sampling methodology: A step-by-step guide to designing, conducting, and analyzing ESM studies (2nd). Leuven: Center for Research on Experience Sampling and Ambulatory Methods Leuven.

Myin-Germeys, I., Oorschot, M., Collip, D., Lataster, J., Delespaul, P., & Van Os, J. (2009). Experience sampling research in psychopathology: Opening the black box of daily life. Psychological medicine, 39(9), 1533–1547. https://doi.org/10.1017/s0033291708004947

Neale, M. C., Hunter, M. D., Pritikin, J. N., Zahery, M., Brick, T. R., Kirkpatrick, R. M., Estabrook, R., Bates, T. C., Maes, H. H., & Boker, S. M. (2016). OpenMx 2.0: Extended Structural Equation and Statistical Modeling. Psychometrika, 81(2), 535–549. https://doi.org/10.1007/s11336-014-9435-8

Oravecz, Z., Tuerlinckx, F., & Vandekerckhove, J. (2011). A hierarchical latent stochastic differential equation model for affective dynamics. Psychological Methods, 16(4), 468–490. https://doi.org/10.1037/a0024375

Os, J. van, Verhagen, S., Marsman, A., Peeters, F., Bak, M., Marcelis, M., Drukker, M., Reininghaus, U., Jacobs, N., Lataster, T., Simons, C., Lousberg, R., Gülöksüz, S., Leue, C., Groot, P. C., Viechtbauer, W., & Delespaul, P. (2017). The experience sampling method as an mHealth tool to support self-monitoring, self-insight, and personalized health care in clinical practice. Depression and Anxiety, 34(6), 481–493. https://doi.org/10.1002/DA.22647

Palmier-Claus, J. E., Myin-Germeys, I., Barkus, E., Bentley, L., Udachina, A., Delespaul, P. A., Lewis, S. W., & Dunn, G. (2011). Experience sampling research in individuals with mental illness: Reflections and guidance. Acta Psychiatrica Scandinavica, 123(1), 12–20. https://doi.org/10.1111/j.1600-0447.2010.01596.x

Park, C., Majeed, A., Gill, H., Tamura, J., Ho, R. C., Mansur, R. B., Nasri, F., Lee, Y., Rosenblat, J. D., Wong, E., & McIntyre, R. S. (2020). The Effect of Loneliness on Distinct Health Outcomes: A Comprehensive Review and Meta-Analysis. Psychiatry Research, 294, 113514. https://doi.org/10.1016/j.psychres.2020.113514

Pek, J., & Wu, H. (2015). Profile Likelihood-Based Confidence Intervals and Regions for Structural Equation Models. Psychometrika, 80(4), 1123–1145. https://doi.org/10.1007/s11336-015-9461-1

Piccirillo, M. L., & Rodebaugh, T. L. (2019). Foundations of idiographic methods in psychology and applications for psychotherapy. Clinical Psychology Review, 71, 90–100. https://doi.org/10.1016/j.cpr.2019.01.002

Piot, M., Mestdagh, M., Riese, H., Weermeijer, J., Brouwer, J. M., Kuppens, P., Dejonckheere, E., & Bos, F. M. (2022). Practitioner and researcher perspectives on the utility of ecological momentary assessment in mental health care: A survey study. Internet Interventions, 30, 100575. https://doi.org/10.1016/j.invent.2022.100575

R Core Team. (2023). R: A Language and Environment for Statistical Computing. https://www.R-project.org/

Reeves, J. W., & Fisher, A. J. (2020). An Examination of Idiographic Networks of Posttraumatic Stress Disorder Symptoms. Journal of Traumatic Stress, 33(1), 84–95. https://doi.org/10.1002/jts.22491

Revol, J., Lafit, G., & Ceulemans, E. (in press). A new sample size planning approach for the VAR(1) model: Predictive Accuracy Analysis. Behavior Research Methods. https://doi.org/10.31234/osf.io/2geh4

Rhudy, M. B., Salguero, R. A., & Holappa, K. (2017). A Kalman Filtering Tutorial for Undergraduate Students. International Journal of Computer Science & Engineering Survey, 08(01), 01–18. https://doi.org/10.5121/ijcses.2017.8101

Rintala, A., Wampers, M., Myin-Germeys, I., & Viechtbauer, W. (2019). Response compliance and predictors thereof in studies using the experience sampling method. Psychological Assessment, 31(2), 226–235. https://doi.org/10.1037/pas0000662

Robinaugh, D. J., Hoekstra, R. H., Toner, E. R., & Borsboom, D. (2020). The network approach to psychopathology: A review of the literature 2008-2018 and an agenda for future research. Psychological Medicine, 50(3), 353–366. https://doi.org/10.1017/S0033291719003404

Schuurman, N. K., & Hamaker, E. L. (2019). Measurement error and person-specific reliability in multilevel autoregressive modeling. Psychological Methods, 24(1), 70–91. https://doi.org/10.1037/met0000188

Schuurman, N. K., Houtveen, J. H., & Hamaker, E. L. (2015). Incorporating measurement error in n=1 psychological autoregressive modeling. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.01038

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464.

Shumway, R. H., & Stoffer, D. S. (2017). Time Series Analysis and Its Applications: With R Examples. https://doi.org/10.1007/978-3-319-52452-8

Silvia, P. J., Kwapil, T. R., Eddington, K. M., & Brown, L. H. (2013). Missed Beeps and Missing Data: Dispositional and Situational Predictors of Nonresponse in Experience Sampling Research. Social Science Computer Review, 31(4), 471–481. https://doi.org/10.1177/0894439313479902

Smit, A. C., Snippe, E., Bringmann, L. F., Hoenders, H. J. R., & Wichers, M. (2023). Transitions in depression: If, how, and when depressive symptoms return during and after discontinuing antidepressants. Quality of Life Research, 32(5), 1295–1306. https://doi.org/10.1007/s11136-022-03301-0

Snippe, E., Elmer, T., Ceulemans, E., Smit, A. C., Lutz, W., & Helmich, M. A. (2024). The Temporal Order of Emotional, Cognitive, and Behavioral Gains in Daily Life during Treatment of Depression. Journal of Consulting and Clinical Psychology.

Snippe, E., Viechtbauer, W., Geschwind, N., Klippel, A., Jonge, P. de, & Wichers, M. (2017). The Impact of Treatments for Depression on the Dynamic Network Structure of Mental States: Two Randomized Controlled Trials. Scientific Reports, 7(1), 46523. https://doi.org/10.1038/srep46523

Song, H., & Ferrer, E. (2009). State-Space Modeling of Dynamic Psychological Processes via the Kalman Smoother Algorithm: Rationale, Finite Sample Properties, and Applications. Structural Equation Modeling: A Multidisciplinary Journal, 16(2), 338–363. https://doi.org/10.1080/10705510902751432

Stadel, M., Stulp, G., Langener, A. M., Elmer, T., Duijn, M. A. J. van, & Bringmann, L. F. (2023). Feedback About a Person’s Social Context – Personal Networks and Daily Social Interactions. Administration and Policy in Mental Health and Mental Health Services Research. https://doi.org/10.1007/s10488-023-01293-8

Stadnitski, T., & Wild, B. (2019). How to Deal With Temporal Relationships Between Biopsychosocial Variables: A Practical Guide to Time Series Analysis. Psychosomatic Medicine, 81(3), 289–304. https://doi.org/10.1097/PSY.0000000000000680

Stochl, J., Soneson, E., Wagner, A. P., Khandaker, G. M., Goodyer, I., & Jones, P. B. (2019). Identifying key targets for interventions to improve psychological wellbeing: Replicable results from four UK cohorts. Psychological Medicine, 49(14), 2389–2396. https://doi.org/10.1017/S0033291718003288

Suls, J., Green, P., & Hillis, S. (1998). Emotional reactivity to everyday problems, affective inertia, and neuroticism. Personality and Social Psychology Bulletin, 24(2), 127–136. https://doi.org/10.1177/0146167298242002

Swanson, J. T. (2023). Modeling Moderators in Psychological Networks [Doctoral dissertation]. Retrieved November 14, 2023, from https://www.proquest.com/openview/d151ab6b93ad47e3f0d5e59d7b6fd3d3

Trull, T. J., & Ebner-Priemer, U. (2013). Ambulatory assessment. Annual Review of Clinical Psychology, 9, 151–176. https://doi.org/10.1146/annurev-clinpsy-050212-185510

Trull, T. J., & Ebner-Priemer, U. W. (2020). Ambulatory assessment in psychopathology research: A review of recommended reporting guidelines and current practices. Journal of Abnormal Psychology, 129(1), 56–63. https://doi.org/10.1037/abn0000473

Tuin, S. van der, Balafas, S. E., Oldehinkel, A. J., Wit, E. C., Booij, S. H., & Wigman, J. T. W. (2022). Dynamic symptom networks across different at-risk stages for psychosis: An individual and transdiagnostic perspective. Schizophrenia Research, 239, 95–102. https://doi.org/10.1016/j.schres.2021.11.018

Van der Krieke, L., Jeronimus, B. F., Blaauw, F. J., Wanders, R. B., Emerencia, A. C., Schenk, H. M., Vos, S. D., Snippe, E., Wichers, M., Wigman, J. T., Bos, E. H., Wardenaar, K. J., & De Jonge, P. (2016). HowNutsAreTheDutch (HoeGekIsNL): A crowdsourcing study of mental symptoms and strengths. International Journal of Methods in Psychiatric Research, 25(2), 123–144. https://doi.org/10.1002/mpr.1495

Van Roekel, E., Heininga, V. E., Vrijen, C., Snippe, E., & Oldehinkel, A. J. (2019). Reciprocal associations between positive emotions and motivation in daily life: Network analyses in anhedonic individuals and healthy controls. Emotion, 19(2), 292–300. https://doi.org/10.1037/emo0000424

Wal, J. M. van der, Borkulo, C. D. van, Haslbeck, J. M. B., Slofstra, C., Klein, N. S., Blanken, T. F., Deserno, M. K., Lok, A., Nauta, M. H., & Bockting, C. L. (2023). Differential impact of preventive cognitive therapy while tapering antidepressants versus maintenance antidepressant treatment on affect fluctuations and individual affect networks and impact on relapse: A secondary analysis of a randomised controlled trial. eClinicalMedicine, 66. https://doi.org/10.1016/j.eclinm.2023.102329

Watling, J., Pawlik, B., Scott, K., Booth, S., & Short, M. A. (2017). Sleep Loss and Affective Functioning: More Than Just Mood. Behavioral Sleep Medicine, 15(5), 394–409. https://doi.org/10.1080/15402002.2016.1141770

Weermeijer, J., Kiekens, G., Wampers, M., Kuppens, P., & Myin-Germeys, I. (2023). Practitioner perspectives on the use of the experience sampling software in counseling and clinical psychology. Behaviour & Information Technology, 1–11. https://doi.org/10.1080/0144929X.2023.2178235

Wichers, M., Hartmann, J. A., Kramer, I. M., Lothmann, C., Peeters, F., Bemmel, L. van, Myin-Germeys, I., Delespaul, P. H., Os, J. van, & Simons, C. J. (2011). Translating assessments of the film of daily life into person-tailored feedback interventions in depression. Acta Psychiatrica Scandinavica, 123(5), 402–403. https://doi.org/10.1111/j.1600-0447.2011.01684.x

Wichers, M., Groot, P. C., & Psychosystems. (2016). Critical Slowing Down as a Personalized Early Warning Signal for Depression. Psychotherapy and Psychosomatics, 85(2), 114–116. https://doi.org/10.1159/000441458

Wichers, M., Smit, A. C., & Snippe, E. (2020). Early warning signals based on momentary affect dynamics can expose nearby transitions in depression: A confirmatory single-subject time-series study. Journal for Person-Oriented Research, 6(1), 1–15. https://doi.org/10.17505/jpor.2020.22042

Wichers, M., Wigman, H., Bringmann, L., & Jonge, P. de. (2017). Mental disorders as networks: Some cautionary reflections on a promising approach. Social Psychiatry and Psychiatric Epidemiology, 52(2), 143–145.

Wright, A. G., & Woods, W. C. (2020). Personalized Models of Psychopathology. Annual Review of Clinical Psychology, 16, 49–74. https://doi.org/10.1146/annurev-clinpsy-102419-125032

Wright, A. G., & Zimmermann, J. (2019). Applied ambulatory assessment: Integrating idiographic and nomothetic principles of measurement. Psychological Assessment, 31(12), 1467–1480. https://doi.org/10.1037/pas0000685

Appendix: State space formulation of the ME-VAR(1) model

We will now give a general formulation of the state space model for a VAR(1) model with measurement error (ME-VAR(1)). Note that the state space framework allows for more general formulations, such as allowing for time-varying parameters (Chen et al., 2021) or dynamic factor analysis (Song & Ferrer, 2009). The measurement equation can be formulated as follows, with m representing the number of observed or manifest variables and q representing the number of latent variables.[9]

\( \begin{equation}
\begin{aligned}
\boldsymbol{y_t}&= \boldsymbol{d}+\boldsymbol{F}\boldsymbol{\eta_t} + \boldsymbol{\omega_t}\\
\boldsymbol{\omega_t} &\sim MvN(\boldsymbol{0,\Sigma_\omega}),
\end{aligned}
\label{eq:var4}
\tag{18}
\end{equation}
\)

where \(\boldsymbol{y_t}\) again contains the observed variables in an \(m \times 1\) vector, and \(\boldsymbol{\eta_t}\) is a \(q \times 1\) vector of latent variables, also known as the states in the state space framework. Furthermore, \(\boldsymbol{d}\) is an \(m \times 1\) vector containing the intercepts of the observed variables, and \(\boldsymbol{F}\) is an \(m \times q\) matrix containing the factor loadings. The \(m \times 1\) vector \(\boldsymbol{\omega_t}\) contains the measurement residuals, which reflect a mix of true, occasion-specific fluctuations in mood due to contextual effects (Castro-Alvarez et al., 2022) and measurement error, but are often referred to as just measurement error (for more information, see Schuurman & Hamaker, 2019). The measurement residuals are assumed to be multivariate normally distributed with means of zero and an \(m \times m\) covariance matrix \(\boldsymbol{\Sigma_\omega}\).
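
To make the measurement equation concrete, the following minimal sketch (in Python with NumPy) draws one observation \(\boldsymbol{y_t}\) from a given state \(\boldsymbol{\eta_t}\) according to Equation (18). The dimensions, intercepts, loadings, and residual covariance are arbitrary illustrative values, not estimates from the empirical examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions (not taken from the paper): m observed and q latent variables
m, q = 3, 3

d = np.array([4.0, 3.0, 5.0])        # m x 1 vector of measurement intercepts
F = np.eye(m, q)                     # m x q factor loading matrix (identity here for simplicity)
Sigma_omega = 0.5 * np.eye(m)        # m x m covariance matrix of the measurement residuals

def measure(eta_t):
    """One draw from the measurement equation y_t = d + F eta_t + omega_t (Equation 18)."""
    omega_t = rng.multivariate_normal(np.zeros(m), Sigma_omega)
    return d + F @ eta_t + omega_t

print(measure(np.array([0.2, -0.1, 0.3])))   # observation implied by an arbitrary state
```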

The measurement model links the observed variables to the latent or state variables. The state or transition equation can be formulated as follows:

\( \begin{equation}
\begin{aligned}
\boldsymbol{\eta_{t}}&= \boldsymbol{c}+\boldsymbol{\Phi}\boldsymbol{\eta_{t-1}} + \boldsymbol{\epsilon_t}\\
\boldsymbol{\epsilon_t} &\sim MvN(\boldsymbol{0,\Sigma_\epsilon}),
\end{aligned}
\label{eq:var5}
\tag{19}
\end{equation}
\)

where the latent intercepts \(\boldsymbol{c}\) are collected in a \(q \times 1\) vector, and \(\boldsymbol{\eta_t}\) is the \(q \times 1\) vector of latent variables, known as states. These states are regressed on their own values at the previous time point, \(\boldsymbol{\eta_{t-1}}\).[10] The \(q \times q\) transition matrix \(\boldsymbol{\Phi}\) links the state from one time point to the next via the auto- and cross-lagged regression effects it entails. The \(q \times 1\) vector \(\boldsymbol{\epsilon_t}\) contains the dynamic residuals at time \(t\), which are also assumed to be multivariate normally distributed with means of zero and a \(q \times q\) covariance matrix \(\boldsymbol{\Sigma_\epsilon}\).
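
As an illustration of how the measurement and transition equations operate together, the following sketch (Python/NumPy) simulates an ME-VAR(1) process by iterating Equations (18) and (19). All parameter values, including the transition matrix and the two residual covariance matrices, are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative ME-VAR(1) in state space form; all parameter values are made up
m = q = 2
d = np.array([5.0, 4.0])                 # measurement intercepts
F = np.eye(m)                            # factor loadings
Sigma_omega = np.diag([0.3, 0.3])        # measurement residual covariance
c = np.zeros(q)                          # latent intercepts
Phi = np.array([[0.4, 0.1],              # transition matrix: auto- and cross-lagged effects
                [0.0, 0.3]])
Sigma_eps = np.diag([1.0, 1.0])          # dynamic residual (innovation) covariance

T = 200
eta = np.zeros((T, q))
y = np.zeros((T, m))
y[0] = d + F @ eta[0] + rng.multivariate_normal(np.zeros(m), Sigma_omega)
for t in range(1, T):
    # Transition equation (19): eta_t = c + Phi eta_{t-1} + epsilon_t
    eta[t] = c + Phi @ eta[t - 1] + rng.multivariate_normal(np.zeros(q), Sigma_eps)
    # Measurement equation (18): y_t = d + F eta_t + omega_t
    y[t] = d + F @ eta[t] + rng.multivariate_normal(np.zeros(m), Sigma_omega)
```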

The VAR model in which we directly estimate the mean rather than the intercept is a special case of the two state space equations (18) and (19). To obtain it, we fix the factor loadings in \(\boldsymbol{F}\) to 1, so that each observed variable maps directly onto its own state, we leave out the measurement residuals \(\boldsymbol{\omega_t}\), so that the dynamic error can no longer be separated from measurement error, and we set the latent intercepts \(\boldsymbol{c}\) to zero. This leaves the intercept vector \(\boldsymbol{d}\) as the only constant in the equations, and because the dynamic residuals \(\boldsymbol{\epsilon_t}\) have a mean of zero, \(\boldsymbol{d}\) becomes the mean \(\boldsymbol{\mu}\) of the process. This leads to Equations (13) and (14).
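
A brief simulation can illustrate this special case: with \(\boldsymbol{F}\) fixed to the identity, no measurement residuals, and \(\boldsymbol{c}=\boldsymbol{0}\), the long-run average of the simulated series is (approximately) the vector \(\boldsymbol{d}=\boldsymbol{\mu}\). In the sketch below (Python/NumPy), the chosen \(\boldsymbol{\mu}\), \(\boldsymbol{\Phi}\), and innovation covariance are again arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Special case of Equations (18)-(19): F = I, no measurement residuals, c = 0,
# so the measurement intercept d plays the role of the process mean mu.
# All parameter values are illustrative only.
q = 2
mu = np.array([5.0, 4.0])                # d = mu
Phi = np.array([[0.4, 0.1],
                [0.0, 0.3]])
Sigma_eps = np.diag([1.0, 1.0])

T = 50_000                               # long series so the sample mean is close to mu
eta = np.zeros((T, q))
for t in range(1, T):
    eta[t] = Phi @ eta[t - 1] + rng.multivariate_normal(np.zeros(q), Sigma_eps)

y = mu + eta                             # y_t = d + eta_t with d = mu (no omega_t)
print(y.mean(axis=0))                    # approximately [5.0, 4.0]
```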