To make sense of their data and effects, scientists might want to standardize (Z-score) their variables. This makes the data unitless, expressed only in terms of deviation from an index of centrality (e.g., the mean or the median). However, alongside these benefits, standardization also comes with challenges and issues that the scientist should be aware of.

The `datawizard` package offers two methods of standardization via the `standardize()` function:

- **Normal standardization**: center around the *mean*, with *SD* units (default).
- **Robust standardization**: center around the *median*, with *MAD* (median absolute deviation) units (`robust = TRUE`).
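
In formula terms, these amount to (x - mean) / SD and (x - median) / MAD. Here is a minimal sketch (not from the original post) checking that against `standardize()`; the toy vector `x` is made up purely for illustration:

```
library(datawizard)

x <- c(2, 4, 4, 7, 9, 30)

# Normal standardization: (x - mean) / SD
all.equal(as.numeric(standardize(x)), (x - mean(x)) / sd(x))

# Robust standardization: (x - median) / MAD
all.equal(as.numeric(standardize(x, robust = TRUE)), (x - median(x)) / mad(x))
```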

Let’s look at the following example:

```
library(datawizard)
library(effectsize) # for data
# let's have a look at what the data look like
data("hardlyworking", package = "effectsize")
head(hardlyworking)
# let's use both methods of standardization
hardlyworking$xtra_hours_z <- standardize(hardlyworking$xtra_hours)
hardlyworking$xtra_hours_zr <- standardize(hardlyworking$xtra_hours, robust = TRUE)

hardlyworking
```

We can see that different methods give different central and variation values:

```
library(dplyr)
library(tidyr)
hardlyworking %>%
  select(starts_with("xtra_hours")) %>%
  pivot_longer(everything()) %>%
  group_by(name) %>%
  summarise(
    mean = mean(value),
    sd = sd(value),
    median = median(value),
    mad = mad(value)
  ) %>%
  knitr::kable(digits = 4)
```

`standardize()` can also be used to standardize a full data frame, where each numeric variable is standardized separately:

`hardlyworking_z <- standardize(hardlyworking)`

```
hardlyworking_z %>%
  select(-xtra_hours_z, -xtra_hours_zr) %>%
  pivot_longer(everything()) %>%
  group_by(name) %>%
  summarise(
    mean = mean(value),
    sd = sd(value),
    median = median(value),
    mad = mad(value)
  ) %>%
  knitr::kable(digits = 4)
```

Weighted standardization is also supported via the `weights` argument, and factors can also be standardized (if you’re into that kind of thing) by setting `force = TRUE`, which converts factors to treatment-coded dummy variables before standardizing.
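
As a minimal sketch (not from the original post), with a made-up vector `x`, weights `w`, and data frame `df` chosen only for illustration:

```
library(datawizard)

x <- c(1, 2, 3, 50)
w <- c(1, 1, 1, 0.1) # down-weight the extreme observation

standardize(x)              # unweighted
standardize(x, weights = w) # weighted mean and SD used for centering/scaling

# Standardize a data frame including its factors (converted to 0/1 dummies)
df <- data.frame(score = c(5, 7, 9, 11), group = factor(c("a", "b", "a", "b")))
standardize(df, force = TRUE)
```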

Standardization is an important step and extra caution is required in **repeated-measures designs**, in which there are three ways of standardizing data:

- **Variable-wise**: The most common method. A simple scaling of each column.
- **Participant-wise**: Variables are standardized “within” each participant, *i.e.*, for each participant, by that participant’s mean and SD.
- **Full**: Participant-wise first, and then re-standardizing variable-wise.

Unfortunately, the method used is often not explicitly stated. This is an issue, as these methods can generate important discrepancies (which can in turn contribute to the reproducibility crisis). Let’s investigate these three methods.

We will take the `emotion` dataset, in which participants were exposed to negative pictures and had to rate their emotions (**valence**) and the amount of memories associated with the picture (**autobiographical link**). One could make the hypothesis that for young participants with no context of war or violence, the most negative pictures (mutilations) are less related to memories than less negative pictures (involving, for example, car crashes or sick people). In other words, **we expect a positive relationship between valence** (with high values corresponding to less negativity) **and autobiographical link**.

Let’s have a look at the data, averaged by participants:

```
library(dplyr)
library(tidyr)
# Download the 'emotion' dataset
load(url("https://raw.github.com/neuropsychology/psycho.R/master/data/emotion.rda"))
# Discard neutral pictures (keep only negative)
emotion <- emotion %>% filter(Emotion_Condition == "Negative")

# Summary
emotion %>%
  drop_na(Subjective_Valence, Autobiographical_Link) %>%
  group_by(Participant_ID) %>%
  summarise(
    n_Trials = n(),
    Valence_Mean = mean(Subjective_Valence),
    Valence_SD = sd(Subjective_Valence)
  )
```

As we can see from the means and SDs, there is a lot of variability **between** participants, both in their means and in their individual *within*-participant SDs.

We will create three data frames standardized with each of the three techniques.

```
Z_VariableWise <- emotion %>%
  standardize()

Z_ParticipantWise <- emotion %>%
  group_by(Participant_ID) %>%
  standardize()

Z_Full <- emotion %>%
  group_by(Participant_ID) %>%
  standardize() %>%
  ungroup() %>%
  standardize()
```

Let’s see how these three standardization techniques affected the **Valence** variable.

We can calculate the mean and SD of *Valence* across all participants:

```
# Create a convenient function to print
summarise_Subjective_Valence <- function(data) {
  df_name <- deparse(substitute(data))
  data %>%
    ungroup() %>%
    summarise(
      DF = df_name,
      Mean = mean(Subjective_Valence),
      SD = sd(Subjective_Valence)
    )
}

# Check the results
rbind(
  summarise_Subjective_Valence(Z_VariableWise),
  summarise_Subjective_Valence(Z_ParticipantWise),
  summarise_Subjective_Valence(Z_Full)
) %>%
  knitr::kable(digits = 2)
```

The **means** and **SDs** appear fairly similar (0 and 1)…

```
library(see)
library(ggplot2)
ggplot() +
  geom_density(aes(Z_VariableWise$Subjective_Valence,
    color = "Z_VariableWise"
  ), size = 1) +
  geom_density(aes(Z_ParticipantWise$Subjective_Valence,
    color = "Z_ParticipantWise"
  ), size = 1) +
  geom_density(aes(Z_Full$Subjective_Valence,
    color = "Z_Full"
  ), size = 1) +
  see::theme_modern() +
  labs(color = "")
```

and so do the marginal distributions…

However, we can also look at what happens at the participant level. Let’s look at the first 5 participants:

```
# Create convenient function
print_participants <- function(data) {
  df_name <- deparse(substitute(data))
  data %>%
    group_by(Participant_ID) %>%
    summarise(
      DF = df_name,
      Mean = mean(Subjective_Valence),
      SD = sd(Subjective_Valence)
    ) %>%
    head(5) %>%
    select(DF, everything())
}

# Check the results
rbind(
  print_participants(Z_VariableWise),
  print_participants(Z_ParticipantWise),
  print_participants(Z_Full)
) %>%
  knitr::kable(digits = 2)
```

It seems like *full* and *participant-wise* standardization give similar results, but ones that differ from *variable-wise* standardization.

Let’s do a **correlation** between the **variable-wise and participant-wise methods**.

```
r <- cor.test(
  Z_VariableWise$Subjective_Valence,
  Z_ParticipantWise$Subjective_Valence
)

data.frame(
  Original = emotion$Subjective_Valence,
  VariableWise = Z_VariableWise$Subjective_Valence,
  ParticipantWise = Z_ParticipantWise$Subjective_Valence
) %>%
  ggplot(aes(x = VariableWise, y = ParticipantWise, colour = Original)) +
  geom_point(alpha = 0.75, shape = 16) +
  geom_smooth(method = "lm", color = "black") +
  scale_color_distiller(palette = 1) +
  ggtitle(paste0("r = ", round(r$estimate, 2))) +
  see::theme_modern()
```

While the three standardization methods roughly present the same characteristics at a general level (mean 0 and SD 1) and a similar distribution, their values are not exactly the same!

Let’s now answer the original question by investigating the **linear relationship between valence and autobiographical link**. We can do this by running a mixed-effects model with participants entered as random effects.

```
library(lme4)
m_raw <- lmer(
  formula = Subjective_Valence ~ Autobiographical_Link + (1 | Participant_ID),
  data = emotion
)

m_VariableWise <- update(m_raw, data = Z_VariableWise)
m_ParticipantWise <- update(m_raw, data = Z_ParticipantWise)
m_Full <- update(m_raw, data = Z_Full)
```

We can extract the parameters of interest from each model, and find:

```
# Convenient function
get_par <- function(model) {
  mod_name <- deparse(substitute(model))
  parameters::model_parameters(model) %>%
    mutate(Model = mod_name) %>%
    select(-Parameter) %>%
    select(Model, everything()) %>%
    .[-1, ]
}

# Run the model on all datasets
rbind(
  get_par(m_raw),
  get_par(m_VariableWise),
  get_par(m_ParticipantWise),
  get_par(m_Full)
)
```

As we can see, **variable-wise** standardization only affects **the coefficient** (which is expected, as it changes the unit), but not the test statistic or statistical significance. However, using **participant-wise** standardization *does* affect the coefficient **and** the significance.
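
To see why the test statistic is invariant under variable-wise standardization, here is a minimal sketch with simulated data (hypothetical, not the `emotion` dataset): rescaling both variables changes the slope by a factor of SD(x)/SD(y), but leaves the t value (and hence the p-value) unchanged.

```
set.seed(123)
x <- rnorm(100)
y <- 2 * x + rnorm(100)

fit_raw <- lm(y ~ x)
fit_std <- lm(scale(y) ~ scale(x)) # variable-wise standardization

summary(fit_raw)$coefficients["x", ]        # estimate differs...
summary(fit_std)$coefficients["scale(x)", ] # ...but the t value is identical
```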

**No method is better or more justified, and the choice depends on the specific case, context, data and goal.**

- **Standardization can be useful in *some* cases and should be justified.**
- **Variable- and participant-wise standardization methods *appear* to produce similar data.**
- **Variable- and participant-wise standardization can lead to different results.**
- **The chosen method can strongly influence the results and should therefore be explicitly stated and justified to enhance reproducibility of results.**

We showed here yet another way of **sneakily tweaking the data** that can change the results. To prevent its use as a bad practice, we can only highlight the importance of open data, open analysis/scripts, and preregistration.

- `datawizard::demean()`: https://easystats.github.io/datawizard/reference/demean.html
- `standardize_parameters(method = "pseudo")` for mixed-effects models: https://easystats.github.io/effectsize/reference/standardize_parameters.html