***Instaskills**, n,* one-page quick learning so you can hit the ground running while on the job

# More Statistical Understanding

Following on from the understanding basic statistics page, this page provides some information on calculating sample sizes, different types of regression models and other multivariate analysis. This may be of use if you're doing surveillance, epidemiology or other modelling for health intelligence.

Multivariate analysis encompasses a variety of statistical methods used to analyse measurements on two or more variables. Regression analysis is a major subset of multivariate analysis that includes methods for predicting values of one or more response variables from one or more predictor variables.

A model is a description of a relationship connecting the variables of interest. It becomes a statistical model when it is fitted to sample data with the aim of generalising beyond the sample to the underlying population from which the sample was drawn.

You will come across the terms '**Bayesian**' and '**frequentist**'. Bayesian methods make statements about the relative evidence for parameter values given a dataset; frequentist methods compare the relative chance of datasets given a parameter value. Bayesian statistics starts from what has been observed and assesses possible future outcomes. Frequentist (or classical) statistics starts with an abstract experiment of what would be observed if one assumes something, and only then compares the outcomes of that abstract experiment with what was actually observed. The key difference for me is that Bayesians say we have prior information about the outcome and use this information in their modelling. To illustrate: if you lose your car keys, a frequentist will use a model to determine the likelihood of where you lost them and infer which area you should search. A Bayesian will note the places you've been since last seeing your keys and use this information to adapt the model and limit the areas where you should search.
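To make the contrast concrete, here is a minimal sketch with made-up numbers: estimating a proportion the frequentist way (data alone) and the Bayesian way (a prior updated by the data). The flat Beta(1, 1) prior is an assumption chosen purely for illustration.

```python
from fractions import Fraction

# Hypothetical data: 7 'successes' observed in 10 trials.
successes, trials = 7, 10

# Frequentist: the maximum-likelihood estimate uses the data alone.
mle = Fraction(successes, trials)                   # 7/10 = 0.7

# Bayesian: start from a prior (here a flat Beta(1, 1)) and update it
# with the data, giving a Beta(1 + successes, 1 + failures) posterior.
a_prior, b_prior = 1, 1
a_post = a_prior + successes                        # 8
b_post = b_prior + (trials - successes)             # 4
posterior_mean = Fraction(a_post, a_post + b_post)  # 8/12 = 2/3

print(float(mle))             # 0.7
print(float(posterior_mean))  # ~0.667 - the prior pulls the estimate toward 0.5
```

With more data, the influence of the prior shrinks and the two estimates converge.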

## Calculating Sample Size

For a continuous outcome (comparing means), you need:

- the mean difference between treatment groups which you would like to be able to detect
- an estimate of the standard deviation within either group

For a binary outcome (comparing proportions), you need:

- the expected proportion with a good outcome under the experimental treatment
- the expected proportion with a good outcome under the control treatment
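For the continuous case, a minimal sketch of the standard normal-approximation formula, with illustrative numbers (the difference of 5 units and SD of 10 are assumptions, not from the text):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Per-group n to detect a mean difference `delta`, assuming a common
    standard deviation `sd`, a two-sided test at level alpha, and the
    normal approximation: n = 2 * sd^2 * (z_alpha + z_beta)^2 / delta^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * sd ** 2 * (z_alpha + z_beta) ** 2 / delta ** 2)

# Example: detect a difference of 5 units when the within-group SD is 10.
print(n_per_group(delta=5, sd=10))  # 63 per group
```

Note how the required n grows with the variance and shrinks with the square of the detectable difference.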

Samples are subject to *random sampling error*, but this decreases as the sample size increases and disappears completely when we can study a full population (census). You can still have *non-random errors*, which can cause validity problems even with full population data, so it's important that your measures are valid and reliable. Self-selection bias is another thing to be aware of: people can freely decide whether or not to participate in a study, so participants could end up being systematically different from those who didn't participate. Self-selection bias can be addressed through 'missing data' analyses.

#### Sample Size for RCTs

- Difference in response rates (e.g. if the risk ratio is 0.88, then the risk of the outcome in the intervention group will be 88% of that in the control group - 88% of 30% = 26.4%)
- Response rate in one group
- Level of statistical significance (alpha) - usually 0.05 (5%)
- Power desired (1 - beta) - usually at least 80%
- Whether the test is 1-sided or 2-sided
- Ratio of sample sizes in the 2 treatment groups - usually 1:1
- For clusters (in cluster RCTs), you'll need the cluster size and an estimate of the intra-cluster correlation coefficient (ICC).
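Putting these inputs together, here is a hedged sketch of the usual normal-approximation formula for comparing two proportions, reusing the example above (risk ratio 0.88 against a 30% control rate):

```python
import math
from statistics import NormalDist

def n_per_arm(p_control, risk_ratio, alpha=0.05, power=0.8):
    """Per-arm sample size for comparing two proportions
    (two-sided test, normal approximation)."""
    p1 = p_control
    p2 = risk_ratio * p_control            # e.g. 0.88 * 0.30 = 0.264
    pbar = (p1 + p2) / 2                   # pooled proportion
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

n = n_per_arm(p_control=0.30, risk_ratio=0.88)
print(n)  # roughly 2,450 per arm - small differences need large trials
```

Different textbooks use slightly different variants of this formula (with or without continuity correction), so treat the exact figure as indicative.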

__Note on clusters__: people are allocated by clusters rather than individually (e.g. vaccine trials). Patients within the same cluster may be more similar to each other than patients from different clusters (think intra- and inter-cluster variation), and this similarity is measured by the ICC. You need to allow for it when you analyse your data, and it also affects the sample size required to achieve a given power. The ICC has a value between 0 and 1: 0 means 2 patients from the same cluster are no more similar than 2 patients from different clusters, and 1 means 2 patients from the same cluster have identical outcomes. Calculate the sample size as usual, then multiply it by the design effect, where design effect = 1 + [(k - 1) × ICC] and k = number of patients per cluster.
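The design-effect adjustment above can be sketched in a few lines; the cluster size of 20, ICC of 0.05 and baseline n of 250 are made-up numbers for illustration:

```python
import math

def design_effect(cluster_size, icc):
    """Design effect = 1 + (k - 1) * ICC, where k = patients per cluster."""
    return 1 + (cluster_size - 1) * icc

# Example: an individually-randomised design needs 250 per arm;
# the trial will recruit clusters of 20 patients, with an assumed ICC of 0.05.
de = design_effect(cluster_size=20, icc=0.05)  # 1 + 19 * 0.05 = 1.95
n_cluster_trial = math.ceil(250 * de)          # inflate the sample size

print(round(de, 2), n_cluster_trial)  # 1.95 488
```

Even a small ICC nearly doubles the required sample size here, which is why ignoring clustering leads to underpowered trials.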

#### Sample Size for Observational Studies

- Size of effect to be detected
- Statistical significance level
- Power of study (usually 0.8 or 0.9)
- Ratio of one group to the other (exposed versus unexposed; cases versus controls)

## Main types of Statistical Analysis

#### Descriptive Statistical Analysis

This is the simplest form, using numbers to describe the qualities of a data set - e.g. mean, mode, median, frequencies, range, variation, standard deviation etc.
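For example, the Python standard library computes these measures directly on a small made-up dataset:

```python
from statistics import mean, median, mode, pstdev

# A small illustrative dataset.
data = [2, 4, 4, 4, 5, 5, 7, 9]

print(mean(data))    # 5
print(median(data))  # 4.5
print(mode(data))    # 4
print(pstdev(data))  # 2.0 (population standard deviation)
```

The range is simply `max(data) - min(data)`, here 7.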

#### Inferential Statistical Analysis

This is used to make inferences or draw conclusions about a larger population based on the findings from a sample group.

#### Associational Statistical Analysis

This is used to find relationships among multiple variables and to make predictions from them - e.g. correlation, regression, coefficients of variation. Note that association alone does not establish causation (see causal analysis below).

#### Predictive Analysis

This uses statistical algorithms, computer simulation and machine learning tools to predict future events and behaviour based on new and historical data trends.

#### Exploratory Data Analysis

This is used to identify patterns and trends in a data set. Can also be used to determine relationships among samples in a population and find missing data points.

#### Causal Analysis

This is used to determine causation or why things happen the way they do. It can be used to uncover the underlying factors that led to an event.

## Types of Regression Models

**Generalised (Least Squares) Linear Model or bivariate linear regression** - the simplest form is the bivariate linear regression, involving a straight-line relationship between one response (dependent) variable and one predictor (independent, explanatory or regressor) variable.

**Multiple regression** - this extends the bivariate linear regression to include more than one predictor variable.

**Linear Models for Categorical Data** - categorical data require special attention in regression analysis because, unlike dichotomous or continuous variables, they cannot be entered into the regression equation just as they are. Instead, they need to be recoded into a series of variables which can then be entered into the regression model. Types of linear models:

- **Logit/logistic/multinomial logistic** - used when the response variable is binary (rather than continuous). Models how the logarithm of the odds of having a particular characteristic varies with the values of the predictor variables.
- **Loglinear** - used when we only have group-level data and the data take the form of a contingency table. The dependent variable is the number of cases in a cell of the table.
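The log-odds scale that logistic models work on can be sketched with illustrative values (the 0.30 probability is a made-up example):

```python
import math

def logit(p):
    """Log-odds of a probability p: log(p / (1 - p))."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Back-transform log-odds to a probability."""
    return 1 / (1 + math.exp(-x))

# A 30% probability corresponds to odds of 0.3/0.7 and negative log-odds.
p = 0.30
print(round(logit(p), 3))               # -0.847
print(round(inv_logit(logit(p)), 6))    # 0.3 (round trip)

# In a fitted logistic model, each coefficient is the change in log-odds
# per unit increase in its predictor, holding the others constant.
```

Exponentiating a coefficient gives the more familiar odds ratio.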

**Survival and Event History Analysis** - survival models are applied to data that specify the time elapsed until an event occurs. This concept of 'time elapsed' implies a starting event and a terminating event (e.g. birth and death, divorce and remarriage). Survival times are the observed times between the initiation of a process of interest (e.g. birth) and the occurrence of the event of interest (e.g. death). In practice, time is always measured in discrete units. When the discrete units are very small, time can be treated as if it were measured on a continuous scale; when they are larger (months, years or decades), it is more appropriate to use discrete time methods. An event consists of some qualitative change that occurs at a specific point in time and does not refer to a gradual change. An event history is a longitudinal record of when events happened to a sample of individuals or collectives. It's worth noting the term 'censoring', which here means 'lost to observation': left censoring is when the event happens prior to the observational period, and right censoring is when the subject has not had the event by the time the observational period is terminated.

- **Life Tables** - a statistical presentation (table or spreadsheet) of the life history of a cohort, commencing with the starting event, as the cohort is progressively thinned out over time by failures (i.e. terminating events). A life table is a basic building block for hazards models. There are two ways of calculating a life table: (1) the actuarial method and (2) the product limit method (used in the estimation of hazard models and also known as the Kaplan-Meier life table; the survival function is calculated at each unique failure time). The actuarial method is often used by life insurance companies, as it can show the probability of a person at a certain age dying before their next birthday (such tables are often called mortality tables). These statistics give the remaining life expectancy for people at different ages and stages and the probability of surviving a particular year of age. Actuarial life tables are computed separately for men and women as they have different mortality rates.
- **Kaplan-Meier Curve** - a plot of the product limit (Kaplan-Meier) estimate of the survival function described above.
- **Cox Proportional Hazards Regression** - this could be viewed as a multivariate life table where the hazard is a function of time and other specified predictor variables, such as residence and education.
- **Discrete Time Model** - used to study the patterns and correlates of the occurrences of events (marriages, deaths, becoming unemployed etc.).
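A bare-bones product limit (Kaplan-Meier) estimator, with made-up follow-up data, might look like this. It follows the usual convention that subjects censored at a given time are still counted as at risk for events occurring at that same time:

```python
def kaplan_meier(times, events):
    """Product limit estimate of the survival function.
    times: follow-up time per subject; events: 1 = event, 0 = censored."""
    survival = {}
    s = 1.0
    # Step down the curve only at times where an event actually occurred.
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        s *= 1 - deaths / at_risk
        survival[t] = s
    return survival

# 5 subjects: events at t=1, 2 and 3; censoring at t=2 and t=4.
km = kaplan_meier(times=[1, 2, 2, 3, 4], events=[1, 1, 0, 1, 0])
print({t: round(s, 3) for t, s in km.items()})  # {1: 0.8, 2: 0.6, 3: 0.3}
```

Censored subjects contribute to the risk sets while they are under observation but never trigger a step in the curve, which is exactly how censoring is meant to be handled.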

**Multilevel models** - hierarchical regression analysis, designed to handle hierarchical and clustered data; looks at group effects on individuals when grouping is present. Types of models include:

- Hierarchical linear modelling
- Random coefficients modelling (RC)
- Covariance components models

**Multiple Classification Analysis (MCA)** - related to multiple regression; a technique for examining the interrelationship between several predictor variables and one dependent variable in the context of an additive model. Best explained as multiple regression with dummy variables! The response variable is quantitative and the predictor variables are categorical, represented by dummy variables.

**Discriminant Function Analysis or Latent Class Analysis** - related to logistic regression.

- **Cluster Analysis** - the aim is the detection of patterns or indications of potentially interesting relationships in the data. Only when some pattern is thought to exist can the further steps be taken of setting up models and hypotheses for future investigation. The results are produced in the form of a graph or some other type of visual display.
- **Factor Analysis & Covariance Structure Models (path analysis/LISREL models)** - two methods of testing latent models. Latent variables are often theoretical concepts, such as intelligence, which cannot be directly measured or cannot be measured without error. We have to make measurements using variables that are assumed to be indicators of the concepts we are interested in. Factor analysis is a regression model for the observed variables on the unobserved latent variables or factors. There are two types: Exploratory Factor Analysis, where the detailed model relating the latent to the observed variables is not determined before the analysis, and Confirmatory Factor Analysis, where the number of latent variables is set by the analyst.
- **Structural Equation Modelling** - these look at tentative causal relations between a set of latent dependent and latent independent variables.

## Causality

- **Path Analysis** - can be seen as an extension of the ordinary regression model. It analyses how a predictor variable affects the response variable not only directly but also indirectly through one or more intervening variables. The first step is to portray it in a diagram with arrows indicating the direction of causality.
- **Graphic Chain Modelling** - used to understand the causal structure underlying the dependence among variables. Variables are grouped into response, intermediate and explanatory variables. Intermediate variables can be treated as responses to some variables and explanatory for others. Arrows point from explanatory variables to response variables.

Worth noting that to establish causality, you need to be able to show that X came before Y (temporal priority of the independent variable), that the observed relationship between X and Y didn't happen by chance alone (empirical association), and that there is nothing else that accounts for the X -> Y relationship (non-spuriousness). The classic Bradford Hill criteria for assessing causality are:
- Strength
- Consistency
- Specificity
- Temporality
- Biological gradient
- Coherence
- Experimental evidence
- Analogy
- Plausibility