Last session, we talked about the foundations of sampling and inference. Remember that we typically deal with a sample of data that is designed to be representative of a population. This sample, however, is a finite subset of what could be a very large population. Recall as well that our business purpose focuses on the entire population, not just the sample. The goal is to gain a valid understanding of a population of customers based on the data from a finite sample.

It's a must, therefore, to make inferences about the entire population using the data that we have. For those inferences to be valid, the sample must be unbiased and representative of the population. This is why we carry out simple random sampling. This sampling technique gives every entity in the population an equal chance of being selected, which in turn increases the chance that the sample is representative of the population.

## Looking Back at Sampling Distribution

A sampling distribution is a statistical concept that describes the distribution of a given statistic, such as the mean or standard deviation, based on multiple samples drawn from a specific population. It captures the randomness of the situation: different samples can give different values of the statistic. In short, every time a different sample is drawn, the sample statistic takes on a different value. This concept helps in understanding the variability of sample statistics and is crucial for making inferences about the population.

Take note:

- The mean of the sampling distribution of the sample mean (X-bar) is equal to the population mean (μ).
- The standard deviation of the sampling distribution, which is also known as the standard error of X-bar, is given by σ/√n, where σ is the population standard deviation and n is the sample size.
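These two properties can be checked with a quick simulation. The sketch below assumes a hypothetical normal population with mean 80 and standard deviation 10 (the same numbers used in the example that follows) and draws many samples of size 30:

```python
import numpy as np

rng = np.random.default_rng(42)

pop_mean, pop_sd, n = 80, 10, 30  # hypothetical population parameters
num_samples = 10_000              # number of repeated samples to draw

# Draw many samples of size n and record each sample's mean
sample_means = rng.normal(pop_mean, pop_sd, size=(num_samples, n)).mean(axis=1)

# Mean of the sampling distribution should be close to the population mean (80)
print(round(sample_means.mean(), 1))

# Its standard deviation should be close to the standard error sigma/sqrt(n) = 10/sqrt(30)
print(round(sample_means.std(ddof=0), 2))
```

The two printed values should land near 80 and near 1.83, respectively, illustrating both bullet points above.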

To illustrate sampling distribution, let's use a practical example involving customer satisfaction scores for a product. In this scenario, you want to understand the average customer satisfaction score for a product sold by your company. Instead of surveying all customers, you take multiple random samples of customer satisfaction scores.

**Step 1:** Take 5 different random samples of 30 customer satisfaction scores each. Suppose the population mean satisfaction score is 80 with a population standard deviation of 10.

**Step 2:** Compute the mean satisfaction score for each of the 5 samples. Suppose the sample means are 78, 82, 81, 79, and 83.

**Step 3:** Plot the 5 sample means on a graph. As you take more samples, the distribution of these sample means will form a shape.

**Step 4:** Analyze the Distribution:

- The mean of this sampling distribution will be an estimate of the population mean (average satisfaction score for all customers).
- The spread (standard error) of this sampling distribution tells you how much the sample mean is expected to vary from the population mean.
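The arithmetic behind Steps 2 and 4 can be sketched with the five hypothetical sample means above, together with the assumed population standard deviation of 10 and sample size of 30:

```python
# Mean satisfaction scores from the 5 samples in Steps 1-2 (hypothetical values)
sample_means = [78, 82, 81, 79, 83]

# The average of the sample means estimates the population mean (true value: 80)
estimate = sum(sample_means) / len(sample_means)
print(estimate)  # 80.6

# Theoretical standard error with population sd = 10 and n = 30 per sample
standard_error = 10 / 30 ** 0.5
print(round(standard_error, 2))  # 1.83
```

With only 5 samples the estimate (80.6) is already close to the true mean of 80; taking more samples would bring it closer still.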

In summary, the sampling distribution captures the variability of sample statistics and is the basis for making inferences about the population.

## Central Limit Theorem

Let's proceed with the focus of today's discussion, which is the Central Limit Theorem (CLT). Essentially, this theorem states that the sampling distribution of the sample mean approaches a normal distribution as the sample size gets bigger, no matter what the shape of the population distribution is. In other words, as the sample gets larger, the distribution of the sample mean becomes more and more normal.

This averaging effect occurs because both small and large values of the population are represented in the sample and therefore tend to cancel each other out, pulling the sample mean toward a stable central value and giving the distribution of sample means a normal shape.

This theorem works with the following assumptions:

- The data must be randomly sampled.
- The sample values must be independent of each other.
- The samples should come from the same distribution.
- The sample size must be sufficiently large (a common rule of thumb is n ≥ 30).
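The theorem's claim can be demonstrated with a small simulation. The sketch below uses a hypothetical, heavily right-skewed population (an exponential distribution) and shows that the means of repeated samples are far more symmetric than the population itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A strongly right-skewed population: exponential with mean 5 (hypothetical)
population = rng.exponential(scale=5.0, size=100_000)

def skewness(x):
    # Simple moment-based skewness: 0 for a perfectly symmetric distribution
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

# Means of many random samples of size n = 50, drawn with replacement
sample_means = rng.choice(population, size=(10_000, 50)).mean(axis=1)

print(round(skewness(population), 2))    # large: population is heavily skewed
print(round(skewness(sample_means), 2))  # much smaller: sample means are near-normal
```

The skewness of the sample means is far closer to zero than that of the population, and it shrinks further as the sample size n grows, exactly as the CLT predicts.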

## Applying the CLT

The CLT is crucial because it allows us to make inferences about population parameters even when the population distribution is unknown. The theorem can be used in the following applications:

- *Estimation and Inference* – You can estimate the population mean and make predictions about the population from the sample mean.
- *Confidence Intervals* – Using the sample mean and standard error, you can construct confidence intervals to estimate the range in which the population mean is likely to fall.
- *Hypothesis Testing* – The CLT provides the foundation for many statistical tests that compare sample data to population parameters or other samples.
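As a quick sketch of the confidence-interval application, suppose (hypothetically) a sample mean of 80.6 with a known population standard deviation of 10 and a sample size of 30, as in the earlier satisfaction-score example. A 95% confidence interval uses the normal critical value 1.96 that the CLT justifies:

```python
import math

sample_mean = 80.6  # hypothetical sample mean satisfaction score
pop_sd = 10         # assumed known population standard deviation
n = 30              # sample size

standard_error = pop_sd / math.sqrt(n)

# 95% confidence interval: sample mean +/- 1.96 standard errors
lower = sample_mean - 1.96 * standard_error
upper = sample_mean + 1.96 * standard_error
print(round(lower, 1), round(upper, 1))  # 77.0 84.2
```

The interval (77.0, 84.2) is the range in which the true population mean is likely to fall at the 95% confidence level.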

Suppose you are a business analyst interested in understanding the average sales per day for a large retail chain. It's impractical to collect sales data for every day across all stores (the population). Instead, you can take multiple random samples of daily sales from different stores (samples).

**Step 1**: Take multiple random samples of daily sales data from different stores.

**Step 2**: Compute the mean sales for each sample.

**Step 3**: Plot the sample means. According to the CLT, this plot will form a normal distribution, even if the original sales data (population) is not normally distributed.

**Step 4**: Use sample means in the following ways:

- The mean of these sample means will approximate the true average sales per day for the entire retail chain.
- The standard error gives you an idea of how much the sample mean is likely to vary from the population mean.
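The four steps above can be sketched as a simulation. The daily-sales figures here are hypothetical, generated from a right-skewed log-normal distribution to emphasize that the population need not be normal:

```python
import numpy as np

rng = np.random.default_rng(7)

# Step 1 setup: hypothetical daily sales for the whole chain (right-skewed, not normal)
daily_sales = rng.lognormal(mean=8.0, sigma=0.5, size=50_000)

# Steps 1-2: take 2,000 random samples of 40 days each and compute each sample's mean
samples = rng.choice(daily_sales, size=(2_000, 40))
sample_means = samples.mean(axis=1)

# Step 4: the mean of the sample means approximates true average daily sales,
# and their spread approximates the standard error sigma/sqrt(n)
print(round(sample_means.mean() / daily_sales.mean(), 3))                # close to 1
print(round(sample_means.std() / (daily_sales.std() / np.sqrt(40)), 2))  # close to 1
```

Both ratios land near 1, confirming that the sample means recover the true average sales and that their spread matches the predicted standard error; a histogram of `sample_means` would show the bell shape the CLT promises in Step 3.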

To summarize, the Central Limit Theorem is a powerful tool in statistics that underpins many aspects of data analysis, especially inferential statistics. It enables analysts to make predictions and decisions about a population based on sample data while using the normal distribution as a reference, regardless of the original population's distribution shape. This theorem justifies the use of the normal distribution in a wide range of statistical procedures, making it a cornerstone of statistical theory and practice.