This document discusses applying control systems engineering principles to behavioral interventions for complex, dynamic problems with imperfect knowledge. It outlines encapsulating previous theoretical knowledge, defining the dynamic decisions of an intervention, devising a system identification experiment, examining individual differences through black-box modeling, examining mechanistic models, and devising a model-predictive controller. A pilot study collected step data from Fitbit users to identify individual models and test model-predictive control of recommended step goals. The approach aims to systematically manage imperfect knowledge to support dynamic, evidence-based intervention decisions tailored to individuals.
Applying control systems engineering to behavioral interventions
1. Making decisions for complex, dynamic problems with imperfect knowledge
The application of control systems engineering to a behavioral intervention
Eric Hekler, PhD, Arizona State University
August 18, 2016
@ehekler (Photo: Pat Castaldo, Flickr)
6. Specific Solutions for Specific Problems
[Diagram: the traditional, professional-led pathway, in which "on average" science produces "on average" evidence for general problems, and design & engineering translate that evidence into specific solutions for specific problems. Key distinguishes traditional vs. emerging pathways and products vs. processes.]
7. Specific Solutions for Specific Problems
[Diagram: the pathway figure with an emerging pathway added: personalization algorithm science and individualization science generate precise evidence for specific problems, still via a professional-led process.]
8. Specific Solutions for Specific Problems
[Diagram: the emerging pathway extended with a citizen/patient-led process alongside the professional-led one.]
9. Specific Solutions for Specific Problems
[Diagram: repeat of the slide 7 pathway figure.]
11. Control Systems Engineering
NSF IIS-1449751: EAGER: Defining a Dynamical Behavioral Model to Support a Just in Time Adaptive Intervention, PIs: Hekler & Rivera
12. Describe & predict: System identification
[Figure: a 99-day time series plotting points provided (100, 300, 500; left axis, -100 to 1,500 points), fictionalized actual steps per day, and the daily step goal (right axis, 0 to 14,000 steps/day), with goals ranging from the baseline median up to 2x the baseline median.]
NSF IIS-1449751: Defining a Dynamical Behavioral Model to Support a Just in Time Adaptive Intervention, PIs: Hekler & Rivera
13. Control: Model-predictive control
Martin, Rivera, & Hekler, American Control Conference (2015)
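The controller step can be sketched as follows. This is a minimal one-step-ahead illustration, assuming a simple identified first-order model and a discrete grid of allowed goals; the coefficients, bounds, and target below are hypothetical, and the actual controller in Martin, Rivera, & Hekler (2015) uses a richer MPC formulation with a longer horizon and constraints.

```python
# Minimal sketch of model-predictive control of daily step goals.
# All numbers (model coefficients, goal bounds, target) are hypothetical.

def predict_steps(a, b, steps_today, goal):
    """One-step-ahead prediction from a first-order ARX model:
    steps[t+1] = a * steps[t] + b * goal[t]."""
    return a * steps_today + b * goal

def mpc_goal(a, b, steps_today, target, goal_grid):
    """Choose the goal (from a discrete grid of allowed goals) whose
    predicted next-day steps fall closest to the target."""
    return min(goal_grid,
               key=lambda g: (predict_steps(a, b, steps_today, g) - target) ** 2)

# Example: hypothetical identified coefficients a=0.6, b=0.5,
# goals allowed between the baseline median and 2x baseline.
baseline = 5000
goal_grid = range(baseline, 2 * baseline + 1, 500)
goal = mpc_goal(0.6, 0.5, steps_today=5200, target=7000, goal_grid=goal_grid)
```

A real MPC would re-solve this optimization every day as new step data arrive, which is exactly the feedback loop that makes the intervention adaptive.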
19. Differential equations (first order shown)
Riley, Martin, Rivera, Hekler, et al. (2016); Martin, Riley, Rivera, Hekler, et al. (2014)
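The first-order form referenced here can be sketched as a standard gain/time-constant differential equation. The notation below (gain $K$, time constant $\tau$, delay $\theta$) is generic control-engineering notation, not necessarily the exact symbols used in the cited papers:

```latex
\tau \frac{dy(t)}{dt} + y(t) = K\, u(t - \theta)
```

Here $y(t)$ is the behavioral outcome (e.g., steps or self-efficacy), $u(t)$ is the input (e.g., the daily goal), $K$ sets how strongly the outcome responds at steady state, $\tau$ sets how quickly it responds, and $\theta$ captures any delay between input and response.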
20. Simulation: Low vs. high self-efficacy
[Figure: simulated trajectories for a low self-efficacy vs. a high self-efficacy individual.]
Riley, Martin, Rivera, Hekler, et al. (2016); Martin, Riley, Rivera, Hekler, et al. (2014)
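The kind of comparison shown on this slide can be sketched by simulating a first-order response under two hypothetical parameter settings; the specific gains and time constants below are illustrative stand-ins, not estimates from the cited papers.

```python
def simulate_first_order(K, tau, u, n_days, y0=0.0):
    """Euler-discretized first-order response (daily time step):
    tau * dy/dt = K*u - y  ->  y[t+1] = y[t] + (K*u - y[t]) / tau."""
    y = [y0]
    for _ in range(n_days):
        y.append(y[-1] + (K * u - y[-1]) / tau)
    return y

# Hypothetical parameters: a "high self-efficacy" individual responds
# faster (smaller tau) and more strongly (larger K) to the same cue u.
low = simulate_first_order(K=0.5, tau=8.0, u=1.0, n_days=30)
high = simulate_first_order(K=1.5, tau=2.0, u=1.0, n_days=30)
```

Plotting `low` and `high` over the 30 days reproduces the qualitative picture: the high self-efficacy trajectory rises faster and settles at a higher level.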
32. Participants
• 22 inactive, overweight Android users, living anywhere in the US
– BMI: 33.7 ± 6.7
– Age: 47 ± 6.2 years
– 87% women
• Average baseline median steps: 4,972 steps/day (SE = 482)
33. Preliminary results: Average effects
• 6,827 (SE = 647): average median steps/day in the last cycle
• 45% (SD = 36): average increase in median steps/day from baseline to final cycle
• 69% (SD = 24): average goals met
• >90%: adherence to daily self-report
47. Specific Solutions for Specific Problems
[Diagram: repeat of the slide 7 pathway figure contrasting the traditional professional-led pathway with the emerging personalization/individualization pathway.]
48. From "in general" to "for me"
Decisions for complex, dynamic problems
Manage & mitigate imperfect knowledge
49. Feedback and questions welcome!
Dr. Eric Hekler, Arizona State University
ehekler@asu.edu, @ehekler
Editor's Notes
The talk will briefly set up the current context for mHealth / UbiComp / digital health research efforts as seen from various disciplinary lenses. Following this, the precision medicine initiative will be discussed followed by a discussion on one subclass of prevention interventions, labeled precision behavior change, which could fit well within the precision medicine initiative. Following the definition of precision behavior change, transdisciplinary research questions, with a particular focus on attempting to articulate intellectual merit and contributions for each discipline when exploring the research questions, will be discussed. The talk will conclude with plausible next steps to spur conversation among the webinar participants and later viewers on ways to refine this transdisciplinary research agenda to see if it is viable and, if so, how best to more actively enable it as an organizing “moon shot” agenda for the mHealth research community.
Professionals still focus on “on average” science (even, it appears, with many precision medicine efforts)
Professionals need to move towards studying the utility of personalization algorithms
Decision policies: we are talking about what this is supposed to do. Citizens = patients, providers, and anyone else driven to solve a problem that the individual has first-hand experience with.
My colleague, Daniel Rivera, and I have been extending this further, using methods from control systems engineering to develop experimental designs that take more advantage of a priori knowledge than the micro-randomization study. In the discussion section, I'd be happy to get into details on these experimental designs, but for now, the main point is to realize that this is a huge shift in the behavioral science community away from ideas like RCTs and instead toward methods that embrace and map out idiosyncrasy.
Based on this, we need to move more into an open discussion in which we explore lots and lots of different ideas if we really want to understand which ones are best.
Sadly, science, particularly behavioral science, doesn't really have the sort of "maker" culture that would allow us to explore those ideas. As such, this is a key emphasis.
Coming to how the daily goal signal was designed: we then developed an experimental design to test this hypothesis. In the control systems world, this methodology is called system identification. System ID experiments are specifically designed to estimate and validate the dynamical model, and the focus is on idiographic modeling, meaning an individual model per participant or user.
Every day, a step goal (external cue) and points (outcome expectancy for reinforcement) are assigned to the participant. Step goals range from doable (baseline median) to ambitious (up to 2.5x baseline).
Each individual has her own unique randomization signal
This strategy also uses “cycles” of the intervention, for us, we used 16 day cycles. So the same randomization signal repeats every 16 days.
The randomization signal is determined using multisine wave design strategies, which maximize the signal-to-noise ratio, are delivered orthogonally in frequency, and are useful for progressively testing model fit, making them valuable for understanding how dynamics change over time for an individual.
Multisine signal design utilizes periodic signals defined in the frequency domain to implement an open-loop experiment (see C.2.1). A useful analogy is an audio equalizer whereby different frequencies like bass or treble can be emphasized; “frequencies” occurring as cycles across time can be used to design an experimental signal (e.g., daily goal variations).
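The multisine idea can be sketched as follows: a sum of sinusoids at harmonics of the cycle's fundamental frequency, so input power lands at known, orthogonal frequencies and the signal repeats every cycle. The specific harmonics, amplitudes, and phases below are hypothetical stand-ins for the actual Just Walk design.

```python
import math

def multisine(n_days, harmonics, amplitudes, phases, cycle=16):
    """Periodic multisine input: a sum of cosines at integer harmonics of
    the cycle's fundamental frequency (default: a 16-day cycle, as in the
    study design), evaluated once per day."""
    signal = []
    for t in range(n_days):
        u = sum(a * math.cos(2 * math.pi * k * t / cycle + p)
                for k, a, p in zip(harmonics, amplitudes, phases))
        signal.append(u)
    return signal

# Hypothetical design: three harmonics; in practice the phases would be
# chosen (e.g., by Schroeder phasing or optimization) to reduce the
# signal's peak factor before mapping it onto the daily goal range.
u = multisine(n_days=32, harmonics=[1, 2, 3],
              amplitudes=[1.0, 0.7, 0.5], phases=[0.0, 1.3, 2.6])
```

Because every component is a harmonic of the 16-day fundamental, the signal satisfies `u[t] == u[t + 16]`, which is what lets the same randomization pattern repeat each cycle.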
One thing to note here, this was not a perpetually adaptive or personalized intervention, it was mainly designed to understand the dynamics and build individualized computational models.
Pseudo-randomly assigns daily goals and points to every participant
We ran our pilot study from June to December 2015. The quick summary, so that everything makes sense: the study was 14 weeks long; participants received a Fitbit, an Android app, and a daily step goal; and we measured many contextual variables informed by SCT, on a daily/weekly/monthly level depending on the variable.
Just Walk is the system that we developed for running the experiment.
It includes a front-end Android app
Notifications of daily step goal and corresponding points
Notifications when user achieves goals
Integration with Fitbit which is used to measure the steps
Daily morning and evening self-report
We also collected weather, and location data but have not analysed that yet.
We recruited 22 inactive, overweight Android users (one lost her Fitbit during the last three weeks but was willing to continue; this compromised system ID analyses for her, as our power calculations required a minimum of 5 cycles; final sample N = 21; 90% women; M = 47.0 ± 6.2 years; BMI 33.7 ± 6.7). Baseline median steps averaged 4,972 steps/day (SE = 482), and median steps in the last cycle were 6,827 steps/day (SE = 647). By design, there was an average 45% (SD = 36%) increase in steps/day from baseline to the last cycle, and participants met 69% (SD = 24%) of goals. Results from a nonlinear mixed-effects model indicated a significant average increase in steps from baseline to the first intervention cycle of 1,500 steps (t = -5.52, p < .001), with a significant quadratic effect (t = -5.01, p < .001), suggesting the increased steps largely leveled off by the 3rd cycle, which, again, is according to design, as the design did not include any progressive increase in step goals. Exit interviews and follow-up surveys suggested that participants liked getting different daily goals (100%), perceived the app to be easy to use (85%), and expressed interest in continued use of the app (88%). The most common problem was a time lag in syncing between Fitbit and Just Walk, which will be addressed in the next version. Adherence to EMA was above 90% for both morning and evening surveys.
Black-box modeling is the first step in the system ID analyses. We used goals, points, and some of our self-report measures as inputs to predict daily steps in this procedure.
The primary interest here is to fit the data regardless of a particular structure of the model. So this is not considering the SCT model structure when conducting the analyses
Typically a trial and error process where you estimate the parameters of various structures and compare results
Minimal knowledge of the structure is used, so we used an autoregressive (ARX) model structure (consistent estimation with probability 1).
What we have been currently doing as part of the blackbox modeling is finding the best fitting model for all participants...as I mentioned earlier, there are various ways to go about this and we carried out an exhaustive search looking over every possible ARX structure (output and input lags), and this is a trial and error process...
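The ARX fitting step described here can be illustrated with a minimal least-squares sketch, including the standard system-ID percentage-fit metric used to compare structures. The data below are simulated from a known system, not Just Walk data, and the helper names are mine.

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Least-squares fit of an ARX(na, nb) model:
    y[t] = sum_i a_i * y[t-i] + sum_j b_j * u[t-j].
    Returns parameters, one-step-ahead predictions, and the targets."""
    start = max(na, nb)
    rows, targets = [], []
    for t in range(start, len(y)):
        row = [y[t - i] for i in range(1, na + 1)]
        row += [u[t - j] for j in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[t])
    Phi, Y = np.array(rows), np.array(targets)
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta, Phi @ theta, Y

def fit_percent(y_true, y_pred):
    """Standard system-ID fit: 100 * (1 - ||y - yhat|| / ||y - mean(y)||)."""
    return 100 * (1 - np.linalg.norm(y_true - y_pred)
                  / np.linalg.norm(y_true - np.mean(y_true)))

# Simulated data: steps driven by a known first-order system plus noise.
rng = np.random.default_rng(0)
u = rng.uniform(5000, 10000, 100)            # daily goals
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.5 * y[t - 1] + 0.4 * u[t - 1] + rng.normal(0, 100)
theta, y_pred, y_true = fit_arx(y, u, na=1, nb=1)
```

An exhaustive structure search, as described in the notes, simply loops `fit_arx` over candidate `(na, nb)` orders and keeps the structure with the best validation `fit_percent`.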
Used all combinations of cycles for estimation and validation, and then obtained
We have been trying to find ties to the statistical methods we use in the social sciences for this process, such as checking assumptions, to bring structure into interpreting and choosing the best models for each participant in a way that is also reliable. This is our first pass at this.
In choosing these models, we have looked at the best average validation fits (using roughly 50-50 estimation/validation), and cross-correlations between the inputs.
We looked at cross-correlations among the inputs to try to use only orthogonal signals, so we removed those signals that were highly correlated in order to choose the most parsimonious models.
We also tried to maintain inter-rater reliability by having two different individuals go over the model-choosing process.
We will be able to properly validate these models only when they enter a controller/ when we do the semi-physical modeling which uses the SCT model structure.
Orthogonal inputs
Autoregressive
A portion of the
Only 1 participant below 10% model fit, suggesting “good enough” model fit for 95% of our sample
For all combinations of cycles as estimation and validation sets, we chose the best (most predictive) ARX structure for that combination.
Model fit was computed per cycle (in the validation set) and then averaged.
This illustrates the results obtained from a specific participant, using 60% of cycles for estimation and 40% for validation: a parsimonious ARX model with 5 inputs and 12 parameters, with a model fit corresponding to 49% of validation output variance. More accurate results are expected using semi-physical model estimation that incorporates the SCT model structure.