Lab Homework:
#Use the read.table function to load the data from lab8hw.txt and store it as an object named hw
#Submit all plots
#1.1) Create a scatter plot with iq on the y-axis and score on the x-axis. How do the variables appear to be related?
#1.2) Conduct a linear regression of iq on score
#1.3) Do you reject or fail to reject the null hypothesis about the slope? Why?
#1.4) What is the interpretation of the slope coefficient from the regression in #1.2?
#1.5) Calculate the correlation coefficient for iq and score
#1.6) Calculate the R-squared from the correlation coefficient. What is the interpretation for this R-squared?
#1.7) Add the regression line to the plot created in #1.1
#1.8) Based on what you see in #1.7, do you have any concerns about the results? Why or why not?
#1.9) Create a dataset hm_iq_score that is a new version of hw but without outliers in the iq & score columns. Use the out() command. Also, fit a regression line for this new dataset.
#1.10) Re-create the plot from #1.1 and add the regression lines from #1.7 and #1.9 (use different colors for the two lines). Explain why the regression lines look either very similar or very different.
————————————————————————————————————————-
Lab lecture notes from class for your reference:
#Lab 8-Contents
#1. Scatter Plots in R
#2. Linear Regression in R
#3. Outliers in Regression
#4. Hypothesis testing in Regression
#5. Correlation and R-Squared in R
#6. Outliers Revisited
#———————————————————————————
# 1. Scatter Plots in R
#———————————————————————————
#Previously we’ve looked at various plots in R.
#Today we are going to learn how to do a scatter plot in R.
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# Scatter Plot: plot(x=data$variable, y=data$variable)
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
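#A quick sketch first, using toy vectors (hypothetical demo
#values, not lab data) to show plot()'s optional labeling arguments:
demo_x=1:10
demo_y=2*demo_x + rnorm(10)
plot(x=demo_x, y=demo_y, xlab="X values", ylab="Y values", main="Demo scatter plot")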
#Let’s start by reading in the lab8a.txt file.
a=read.table('lab8a.txt', header=T)
a #The data "a" contains variables named X and Y
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#Exercise 1-1:
# A) Create a scatter plot for the variables in a.
# Put X on the x-axis and Y on the y-axis
# B) What does the scatter plot look like? Is it linear?
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#A)
plot(y=a$Y, x=a$X)
#B)
#It looks roughly linear
#———————————————————————————
# 2. Linear Regression in R
#———————————————————————————
#R has a function that computes the regression
#of Y on X (Best fit line).
#Linear regression is just the familiar straight line y = mx + b:
#there is a slope and an intercept.
#In linear regression, we re-write this function as y = βx + a
#??????????????????????????????????????????????????????????????#
#Thought Question 1: In the equation y = βx + a,
#which is the slope and which is the intercept term?
#??????????????????????????????????????????????????????????????#
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# Linear Regression: lm(data$variable ~ data$variable)
# lm(outcome/dependent variable ~ predictor/independent variable/determinant)
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
#If we wanted to find the best fit line for our data
#we could use the linear regression function:
lm(a$Y~a$X)
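#Note: an equivalent (and often cleaner) call passes the data
#frame once and uses bare variable names in the formula:
lm(Y~X, data=a)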
#How can we interpret the values we get?
#Intercept = -0.3436 #When x is zero,
#the mean value of y is -0.3436
#Slope = 1.1153
#For a 1 unit increase in x, y increases by 1.1153 points
#??????????????????????????????????????????????????????????????#
#Thought Question 2: How would we interpret the slope if the
#coefficient had been negative? e.g. -1.1153
#??????????????????????????????????????????????????????????????#
#———————————————————————————
# 3. Outliers in Regression
#———————————————————————————
#One of the concerns we should have about the data in the
# previous section is that there are outliers in the
#original data. Let’s trim the outliers to see
#how this affects our regression lines.
#I’ll re-plot the data
plot(y=a$Y, x=a$X)
#I’m also going to use a new command to identify
#the rows where outliers occur.
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# Get row info for plotted points: identify(data)
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
identify(a)
#On the plot, we can click on the outliers to figure out
#what row the outliers occur on
#They turn out to be rows 8 and 20
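#A handy sketch: identify() also returns the row numbers of the
#points you click, so we can store them instead of reading them
#off the plot (interactive; right-click or press Esc to finish):
out_rows=identify(a)
out_rows #prints the stored row numbers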
#Now, we can close the plot we created
#and let’s go back and plot our data,
#but now by adding the regression line
#To add the regression line,
#I’ll store the results of the linear regression
#into an object called m1 (model1)
m1=lm(a$Y~a$X)
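#A small sketch: coef() pulls the intercept and slope out of the
#stored model, which is handy for later calculations:
coef(m1) #returns the intercept and slope as a named vector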
#We can then re-draw the plot
plot(y=a$Y, x=a$X)
#And use the abline() function to add the regression line
abline(m1)
# Now, in order to see the effects of the outliers
# I might like to see the regression lines from data
#where the outliers have been removed.
#I'll create some other versions of the dataset "a"
#that do just that.
a8=a[-8,] #Does not contain row 8
a20=a[-20,] #Does not contain row 20
a8_20=a[c(-8,-20),] #Does not contain rows 8 and 20
#I can then run the regressions on these limited datasets.
m2=lm(a8$Y~a8$X)
m3=lm(a20$Y~a20$X)
m4=lm(a8_20$Y~a8_20$X)
#And then plot all the regression lines on the plot.
plot(y=a$Y, x=a$X)
abline(m1, col="black")
abline(m2, col="red")
abline(m3, col="green")
abline(m4, col="blue")
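#A legend helps tell the four lines apart (an optional touch;
#the position and labels here are just suggestions):
legend("topleft",
       legend=c("all data","row 8 removed","row 20 removed","rows 8 & 20 removed"),
       col=c("black","red","green","blue"), lty=1)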
#———————————————————————————
# 4. Hypothesis testing in Regression
#———————————————————————————
#In regression, our goal in general is to find out
#if two variables are related to each other.
#A relationship is indicated when the slope relating
#the two variables is not 0.
#Then, in regression, our Null and Alternative Hypotheses are:
# H0: Beta_1 = 0
# HA: Beta_1 different from 0
#We can test the null hypothesis here by using
#the “summary()” command on our MODELS
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# summary of results: summary(model)
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#Example: If I wanted to know if our original model
#(without removing outliers) had a slope of 0
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
summary(m1)
#We then compare the p-value to our alpha level
#A) If pval < alpha, then Reject the Null Hypothesis
#B) If pval > alpha, then Fail to Reject the Null Hypothesis
#I fail to reject the null hypothesis of Beta_1 = 0
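#A sketch for pulling the slope's p-value out programmatically
#(summary()'s coefficient table has one row per term; column 4
#holds the p-values, so row 2, column 4 is the slope's):
summary(m1)$coefficients[2, 4]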
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#Exercise 4-1:
# Test the null hypothesis for the slopes in Models 2, 3, and 4.
# Do you reject or fail to reject for each model?
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#A) If pval < alpha, then Reject the Null Hypothesis
#B) If pval > alpha, then Fail to Reject the Null Hypothesis
summary(m2) #Reject H0: p-value 0.0141 < alpha=.05
summary(m3) #Fail to reject H0
summary(m4) #Reject H0
#———————————————————————————
# 5. Correlation and R-squared in R
#———————————————————————————
#We just learned how to do Linear regression in R
#using the lm() function.
#Linear regression told us how a 1 unit increase in X
#affects Y.
#Correlation coefficients (rho) are another way of
#representing how strong a linear relationship is.
#They range from -1 to 1, with values further away
#from zero representing a stronger association.
#Positive values indicate that as X increases,
#Y increases
#Negative values indicate that as X increases,
#Y decreases
#Below is the function for a correlation between
#two variables:
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# Correlation: cor(data$variable1, data$variable2)
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#Exercise 5-1:
# Use the correlation function to find the correlation
#between X and Y in our dataset "a"
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
cor(a$X, a$Y)
#Because the correlation is positive we know that
# as X increases, Y increases.
#We also knew this before when we did linear regression
#and looked at the plots.
#The correlation coefficient is related to something
#from linear regression called R-squared.
#R-squared represents the proportion of variability
#in the outcome (Y) explained by the predictor (X).
#If we think of our correlation coefficient as R,
#then R-squared will be:
cor(a$X, a$Y)^2
#This means that ~14.6% of the variability in Y
# is explained by the scores in X.
#Which is the same value reported in the linear regression
summary(m1)
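#We can confirm the match programmatically (a quick sketch;
#summary() stores this value in its r.squared component):
all.equal(cor(a$X, a$Y)^2, summary(m1)$r.squared)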
#———————————————————————————
# 6. Outliers Revisited
#———————————————————————————
#For this part, we will need the Rallfun-v23.txt source file
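#Load the source file first so out() is available (this assumes
#Rallfun-v23.txt is in your working directory):
source("Rallfun-v23.txt")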
#Import the data from lab8b.txt into R in table form; save as object called b.
b=read.table('lab8b.txt', header=TRUE)
b #Contains 26 rows, with an X variable and a Y variable
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#Exercise 6-1:
# A) Create a scatter plot (X on x-axis, Y on y-axis)
# B) Based on the scatter plot, should the correlation
# be positive or negative?
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#A)
#B)
# Previously we visually identified outliers
# and used the identify() command to find their
# row numbers so we could eliminate them
# Instead, let's use a more systematic approach:
#an outlier-detection technique based on
#the MAD-median rule
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# Identify Outliers using the MAD-median rule: out(data$variable)
#^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^#
# For example, I can identify the outliers in X by doing the following.
out(b$X)
# n.out tells me how many outliers there are
# out.id tells me the rows they occur on.
#I could then create a new version of b that does not contain outliers in X
brmX=b[c(-19,-25), ]
#And then find the correlation for this version
cor(brmX$Y, brmX$X)
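#A sketch that avoids hard-coding the row numbers, pulling them
#from out()'s returned out.id component instead (assuming out()
#returns its results as a list, as the printout suggests):
idX=out(b$X)$out.id
brmX2=b[-idX, ] #same rows dropped as brmX above
cor(brmX2$Y, brmX2$X) #should match the correlation above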
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#Exercise 6-2:
# A) Create dataset brmY that is a new version of b but with outliers in Y removed (using the MAD-median rule)
# B) Create dataset brmXY that is a new version of b but with outliers in X OR Y removed (using the MAD-median rule).
# C) What is the correlation coefficient between X and Y for part A?
# D) What is the correlation coefficient between X and Y for part B?
#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#*#
#A)
out(b$Y)
brmY=b[c(-22,-26), ]
#B)
brmXY=b[c(-22,-26,-19,-25),]
#C)
cor(brmY$Y, brmY$X)
#D)
cor(brmXY$Y, brmXY$X)
#Now, if we look at all of these correlation values
#after removing the various outliers, what do we notice?
cor(b$Y, b$X)
cor(brmX$Y, brmX$X)
cor(brmY$Y, brmY$X)
cor(brmXY$Y, brmXY$X)
#And now, what does our plot look like if we remove outliers in X or Y?
plot(y=b$Y, x=b$X)
points(y=brmXY$Y, x=brmXY$X, col="red")
#Are there still outliers?
#??????????????????????????????????????????????????????????????#
#Thought Question 3: What does this tell us about
#our outlier detection technique?
#??????????????????????????????????????????????????????????????#