If you needed to read a file using an absolute path, what would be the first symbol in your argument
(...) when using the read_csv function?
A. read_csv(">...")
B. read_csv(";...")
C. read_csv("...")
D. read_csv("/...")
In [7]: test_1.1()
[1] "Success!"
True or false: the file argument in the read_csv function that uses an absolute path can never look
like that of a relative path.
In [10]: # Make sure the correct answer is written in lower-case (true / false)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.
In [11]: test_1.2()
[1] "Success!"
Question 1.3 Match the following paths with the correct path type that they represent:
{points: 1}
Example Path
A. /Users/my_user/Desktop/UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx
B. https://www.ubc.ca
C. file_1.csv
D. /Users/name/Documents/Course_A/homework/my_first_homework.docx
E. homework/my_second_homework.docx
F. https://www.random_website.com
Path Type
1. absolute
2. relative
3. URL
For every example path, create an object using the letter associated with that path and assign
it the corresponding number from the list of path types. For example: B <- 1
In [14]: test_1.3()
1. test_1.3()
2. test_that("Solution is incorrect", {
. expect_equal(digest(A), "6717f2823d3202449301145073ab8719")
. expect_equal(digest(B), "e5b57f323c7b3719bbaaf9f96b260d39")
. expect_equal(digest(C), "db8e490a925a60e62212cefc7674ca02")
. expect_equal(digest(D), "6717f2823d3202449301145073ab8719")
. expect_equal(digest(E), "db8e490a925a60e62212cefc7674ca02")
. expect_equal(digest(F), "e5b57f323c7b3719bbaaf9f96b260d39")
. })
3. test_code(desc, code, env = parent.frame())
4. get_reporter()$end_test(context = get_reporter()$.context, test = tes
t)
5. stop("Test failed: '", test, "'\n", messages, call. = FALSE)
What would the relative path look like if the working directory (i.e., the folder containing the Jupyter
notebook from which you are running your R code) is now the UBC folder?
A. sn_trial_1.xlsx
B. /SciaticNerveLab/sn_trial_1.xlsx
C. BIOL363/SciaticNerveLab/sn_trial_1.xlsx
D. UBC/BIOL363/SciaticNerveLab/sn_trial_1.xlsx
E. /BIOL363/SciaticNerveLab/sn_trial_1.xlsx
In [24]: test_1.4()
[1] "Success!"
Question 1.5
{points: 1}
Match the following paths with the most likely kind of data format they contain.
Paths:
1. https://www.ubc.ca/datasets/data.db
2. /home/user/downloads/data.xlsx
3. data.tsv
4. examples/data/data.csv
5. https://en.wikipedia.org/wiki/Normal_distribution
Dataset Types:
A. Excel Spreadsheet
B. Database
C. HTML file
For every dataset type, create an object using the letter associated with the example and assign it
the corresponding number from the list of paths. For example: F <- 5
In [26]: test_1.5()
[1] "Success!"
Not all data sets come as perfectly organized as the ones you worked with last week. Time and
effort were put into ensuring that those files were arranged with headers, columns were separated by
commas, and metadata was excluded from the beginning of the file.
Now that you understand how to read files located outside (or inside) of your working directory, you
can begin to learn the tips and tricks necessary to overcome the limitations of read_csv .
In [4]: ### Run this cell to learn more about the arguments used in read_csv
### Reading over the help file will assist with the next question.
?read_csv
Question 2.1
{points: 1}
Match the following descriptions with the corresponding arguments used in read_csv :
Descriptions
H. Specifies whether or not the first row of data in your file contains column labels. Also allows you to
create a vector that can be used to label columns.
J. Specifies the number of lines which must be ignored because they contain metadata.
Arguments
1. file
2. delim
3. col_names
4. skip
For every description, create an object using the letter associated with the description and assign it
the corresponding number from the list of arguments. For example: G <- 1
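To see how these arguments fit together, here is a small sketch (the file name and column names are invented for illustration, not part of this worksheet):

```r
library(readr)

# Suppose a hypothetical file whose first 2 lines are metadata and whose
# data rows carry no header. Skip the metadata and supply column names:
example_df <- read_csv("data/example_with_metadata.csv",
                       skip = 2,
                       col_names = c("country", "score"))
```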
In [29]: test_2.1()
[1] "Success!"
read_csv2 and read_delim can both be used for reading files that have columns separated
by ; .
Assign your answer to an object called answer2.2 . Make sure to write in all lower-case.
In [32]: # Make sure the correct answer is written in lower-case (true / false)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.
In [33]: test_2.2()
[1] "Success!"
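To illustrate the point of the previous question (with a made-up file name), both of the following calls parse a semicolon-separated file:

```r
library(readr)

# read_delim with an explicit delimiter:
df_a <- read_delim("data/example_semicolon.csv", delim = ";")

# read_csv2 assumes ";" as the delimiter (and "," as the decimal mark):
df_b <- read_csv2("data/example_semicolon.csv")
```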
read_tsv can be used for files that have columns separated by which of the following:
A. letters
B. tabs
https://hub-prod-dsci.stat.ubc.ca/jupyter/user/96861/notebooks/dsci-100/materials/worksheet_02/.ipynb 7/36
7/11/2020 .ipynb - Jupyter Notebook
C. numbers
D. commas
In [35]: test_2.3()
[1] "Success!"
To clean up the file and make it easier to read, we only kept the country name, happiness score,
economy (GDP per capita), life expectancy, and freedom. The happiness scores and rankings use
data from the Gallup World Poll, which surveys citizens in countries from around the world.
Kaggle stores this information, but it is compiled by the Sustainable Development Solutions
Network. They have surveyed these factors nearly every year since 2012, enabling global comparisons
that can inform political decision making. These landmark surveys are highly recognized and allow
countries to learn and grow from one another. One day, they will provide historical insight into the
nature of our time.
A. Corruption
B. Government Intervention
C. Perception of Corruption
In [37]: test_3.1()
[1] "Success!"
In [39]: test_3.2()
[1] "Success!"
It is often a good idea to try to "inspect" your data to see what it looks like before trying to load it
into R. This will help you figure out the right function to call and what arguments to use. When your
data are stored as plain text, you can do this easily with Jupyter (or any text editor).
Open all the files named happiness_report... in the data folder in your working directory
(the worksheet_02 directory) using Jupyter (again, this video (https://www.youtube.com/watch?
v=6orO4YMAyeQ) shows you how to do this). This will allow you to visualize the files and the
organization of your data. Based on your findings, fill in the missing items in the table below. This
table will be very useful to refer back to in the coming weeks.
You'll notice that trying to open one of the files gives you an error of the form Error! ... is
not UTF-8 encoded . This means that this data is not stored as human-readable plain text. For
this special file, just fill in the read_* function entry, the other columns will be left blank.
In [42]:
File                             Delimiter  Header?  Skip?  Lines to skip  read_* function
happiness_report.csv             ,          A        no     NA             read_csv
happiness_report_semicolon.csv   ;          yes      no     NA             B
happiness_report_no_header.csv   ,          E        no     NA             read_csv
happiness_report.xlsx                                                      F
For the missing items (labelled A to F) in the table above, create an object using the letter and
assign it the corresponding missing value.
For example: A <- "yes" . The possible options for each column are given in the first row of the
table.
In [10]: test_3.3()
[1] "Success!"
Question 3.4
{points: 1}
Read the file happiness_report.csv in the data folder using the shortest relative path. Hint:
preview the data using Jupyter (as shown in this video (https://www.youtube.com/watch?
v=6orO4YMAyeQ)) so you know which read_* function and arguments to use.
Assign the relative path (the string) to an object named happiness_report_path , and assign
the output of the correct read_* function you call to an object named happiness_report .
A tibble: 10 × 5
In [12]: test_3.4()
[1] "Success!"
If Norway is in "first place" based on the happiness score, at what position is Canada?
A. 3rd
B. 15th
C. 7th
D. 28th
In [13]: print(happiness_report)
# A tibble: 155 x 5
country happiness_score GDP_per_capita life_expectancy freedom
<chr> <dbl> <dbl> <dbl> <dbl>
1 Norway 7.54 1.62 0.797 0.635
2 Denmark 7.52 1.48 0.793 0.626
3 Iceland 7.50 1.48 0.834 0.627
4 Switzerland 7.49 1.56 0.858 0.620
5 Finland 7.47 1.44 0.809 0.618
6 Netherlands 7.38 1.50 0.811 0.585
7 Canada 7.32 1.48 0.835 0.611
8 New Zealand 7.31 1.41 0.817 0.614
9 Sweden 7.28 1.49 0.831 0.613
10 Australia 7.28 1.48 0.844 0.602
# … with 145 more rows
In [17]: test_3.5()
[1] "Success!"
Question 3.6.1
{points: 1}
For each question in the ranges 3.6.1 to 3.6.5 and 3.7.1 to 3.7.2, fill in the ... in the code given.
Replace fail() with your finished answer. Refer to your table above and don't be afraid to ask
for help. Remember you can use the ? help operator to access the documentation for a function
(e.g. ?read_csv ).
A tibble: 6 × 5
In [19]: test_3.6.1()
[1] "Success!"
Take a look at the data type in the GDP_per_capita , life_expectancy , and freedom
columns. It says <chr> ; that stands for "character" or text data -- not numeric as we would hope!
The happiness_score column has <dbl> (stands for "double-precision floating point
number", a numeric type), which is correct. We'd like the other columns to have this type as well...
what happened?
If we look closer, we'll see that the decimal point in this data was a comma, rather than a period
(a convention common in some European countries).
Instead of read_delim , for this data we'll need another function that can handle commas as
decimal points.
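One hedged sketch of what such a call could look like (the file name here is made up; readr's locale mechanism is one way to declare the decimal mark):

```r
library(readr)

# read_csv2 assumes ";" separators and "," decimal marks:
df_a <- read_csv2("data/example_european.csv")

# Equivalently, read_delim can be told about the decimal mark explicitly:
df_b <- read_delim("data/example_european.csv", delim = ";",
                   locale = locale(decimal_mark = ","))
```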
Question 3.6.2
{points: 1}
Read in the file happiness_report_semicolon.csv again, but this time use a different
read_* function than read_delim to ensure that the column types are correct. Remember you
can use the ? help operator to access the documentation for a function (e.g. ?read_csv ).
Hint: take a look at the list of read_* functions at the top of this worksheet under the learning
goals section. Name the data frame happy_semi_df2 .
Using ',' as decimal and '.' as grouping mark. Use read_delim() for more
control.
Parsed with column specification:
cols(
country = col_character(),
happiness_score = col_double(),
GDP_per_capita = col_double(),
life_expectancy = col_double(),
freedom = col_double()
)
A tibble: 6 × 5
In [21]: test_3.6.2()
[1] "Success!"
Question 3.6.3
{points: 1}
Read in the file happiness_report.tsv using the appropriate read_* function and name it
happy_tsv .
A tibble: 6 × 5
In [23]: test_3.6.3()
[1] "Success!"
Question 3.6.4
{points: 1}
A tibble: 6 × 5
In [25]: test_3.6.4()
[1] "Success!"
Question 3.6.5
{points: 1}
A tibble: 6 × 5
In [27]: test_3.6.5()
[1] "Success!"
Question 3.7
{points: 1}
Earlier when you tried to open happiness_report.xlsx in Jupyter, you received an error of the
form Error! /... is not UTF-8 encoded . This happens because Excel spreadsheet files
are not stored in plain text, and so Jupyter can't open them with its default text viewing program.
This makes them a bit harder to inspect before trying to open in R .
To inspect the data, we will just try to load happiness_report.xlsx using the most basic form
of the appropriate read_* function, passing only the filename as an argument. Assign the output
to a variable called happy_xlsx .
Note: you can also try to examine .xlsx files with Microsoft Excel or Google Sheets before
loading into R.
A tibble: 6 × 5
In [29]: test_3.7()
[1] "Success!"
Question 3.8
{points: 1}
Opening the data in a text editor showed some clear differences. Do all the data sets look the
same once you read them into your R notebook ( "yes" or "no" )?
Assign your answer to an object called answer3.8 . Make sure to write in all lower-case.
In [30]: # Make sure the correct answer is written in lower-case (yes / no)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.
In [31]: test_3.8()
[1] "Success!"
Question 3.9
{points: 1}
Using the happy_header data set that you read earlier, plot life_expectancy vs.
GDP_per_capita . Note that the statement "plot A vs. B" usually means to plot A on the y-axis,
and B on the x-axis. Be sure to use xlab and ylab to give your axes human-readable labels.
In [34]:
In [35]: test_3.9()
1. test_3.9()
year
month
day
day of the week (from 1 - 7.999..., with fractional days based on departure time)
origin airport code
destination airport code
flight distance (miles)
scheduled departure time (local)
departure delay (minutes)
scheduled arrival time (local)
arrival delay (minutes)
diverted? (True/False)
cancelled? (True/False)
We can use our dataset to figure out which airline company was the least likely to experience a
flight delay in 2015.
In [36]: # Make sure the correct answer is written in lower-case (true / false)
# Surround your answer with quotation marks.
# Replace the fail() with your answer.
In [37]: test_4.1()
[1] "Success!"
If we're mostly concerned with getting to our destination on time, which variable in our dataset
should we use as the y-axis of a plot?
A. flight distance
B. departure delay
D. arrival delay
Assign your answer as a single character to an object called answer4.2 . For example,
answer4.2 <- 'F'
In [39]: test_4.2()
[1] "Success!"
Let's start exploring our data. The file is stored in data/flights_filtered.db in your working
directory (still the worksheet_02 folder). If you try to open the file in Jupyter to inspect its
contents, you'll again run into the Error! ... is not UTF-8 encoded message you got
earlier when trying to open an Excel spreadsheet in Jupyter. This is because the file is a database
(often denoted by the .db extension), and databases are usually not stored in plain text.
Note: the tbl function returns a reference to a database table, not the actual data itself. This
allows R to talk to the database / get subsets of data without loading the entire thing into R!
The next few questions will walk you through this process.
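In outline, the pattern the next questions build up looks roughly like this (the database file and table name here are placeholders, not the worksheet's answers):

```r
library(DBI)
library(dplyr)

# Open a connection to a hypothetical SQLite database file...
conn <- dbConnect(RSQLite::SQLite(), "data/example.db")

# ...see which tables it contains...
dbListTables(conn)

# ...and build a lazy reference to one of them (no rows loaded yet):
example_table <- tbl(conn, "example_table_name")
```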
Question 4.3.1
{points: 1}
Use the dbConnect function to open and connect to the flights_filtered.db database in
the data folder.
Note: we have provided the first argument, RSQLite::SQLite() , to dbConnect for you
below. This just tells the dbConnect function that we will be using an SQLite database.
In [43]: #conn <- dbConnect(RSQLite::SQLite(), '...') # replace ... with the database file's path
In [44]: test_4.3.1()
[1] "Success!"
Question 4.3.2
{points: 1}
Use the dbListTables function to inspect the database to see what tables it contains.
Make a new variable named flights_table_name that stores the name of the table with our
data in it.
In [45]: # Use this cell to figure out how to answer the question
# Call the dbListTables function in this cell and take a look at the output
# If you don't know what argument to give dbListTables, use ?dbListTables to check its documentation
# Once you've called this and seen the output, insert the output string in the cell below
dbListTables(conn)
'bos_flights'
In [47]: test_4.3.2()
[1] "Success!"
Question 4.3.3
{points: 1}
Use the tbl function to create an R reference to the table so that you can manipulate it with
dbplyr functions.
In [49]: test_4.3.3()
[1] "Success!"
Now that we've connected to the database and created an R table object, we'll take a look at the
first few rows and columns of the flight on-time performance data. Even though flight_data
isn't a regular R dataframe---it's a database table connection, or specifically a
tbl_SQLiteConnection ---the functions from the dbplyr package let us treat it like an R
dataframe!
So let's try using the head function and see what happens:
It works! And---as luck would have it---it also works to use the select and filter functions
you've learned about previously.
Note: not all functions that you're familiar with work on database table tbl reference objects. For
example, if you try to run nrow (to count the rows) or tail (to get the last rows of the table), you
won't get the result you expect.
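A few concrete illustrations of this caveat, assuming flight_data is the tbl reference created above:

```r
head(flight_data)   # works: translated into a SQL query for the first rows
nrow(flight_data)   # returns NA -- the lazy reference doesn't know its row count
# tail(flight_data) # errors: a database table has no defined "last" rows
```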
Question 4.4
{points: 2}
Use the select and filter functions to extract the arrival and departure delay columns for
rows where the origin airport is BOS.
In [52]: test_4.4()
[1] "Success!"
In [53]: # Take a look at `delay_data` to make sure it has the two columns we expect
# Run this cell before continuing.
head(delay_data)
You'll notice in the Source: line that the dimension of the table is listed as [?? x 2] . This is
because databases do things in the laziest way possible. Since we only asked the database for its
head (the first few rows), it didn't bother going through all the rows to figure out how many there
are. This sort of laziness can help make things run a lot faster when dealing with large datasets.
Our next task is to visualize our data to see whether there is a difference in delays for arrivals at and
departures from BOS . But before we do that, let's figure out just how much data we're working
with using the count function.
Yikes---that's a lot of data! If we tried to do a scatter plot of these, we probably wouldn't be able to
see anything useful; all the points would be mushed together. Let's try using a histogram instead. A
histogram helps us visualize how a particular variable is distributed in a dataset. It does this by
separating the data into bins, and then plotting vertical bars showing how many data points fell in
each bin.
For example, we could use a histogram to visualize the distribution of waiting times between
eruptions of the Old Faithful geyser in Yellowstone National Park, Wyoming with the
geom_histogram layer. The bins argument specifies the number of bins to use in the
histogram.
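For example, using R's built-in faithful data frame (the bin count here is an arbitrary choice):

```r
library(ggplot2)

# waiting: minutes between eruptions of Old Faithful
faithful_hist <- ggplot(faithful, aes(x = waiting)) +
  geom_histogram(bins = 30) +
  xlab("Waiting time between eruptions (minutes)") +
  ylab("Number of eruptions")
faithful_hist
```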
We'll use histograms to visualize the departure delay times and arrival delay times separately.
Question 4.5
{points: 1}
Plot the arrival delay time data as a histogram. You will plot the delay (in hours) separated into 15-
minute-wide bins on the x axis. The y axis will show the percentage of flights departing BOS that
had that amount of delay during 2015.
You'll do this by finishing the code segment provided below. There are 4 places where ...
appears in the provided code below. Replace each instance of ... with the correct item from the
following list:
ARRIVAL_DELAY/60
'steelblue'
"Delay (hours)"
geom_histogram
In [61]:
binwidth=.25,fill="lightblue",color="steelblue")+scale_x_continuous(limits=c
Warning message:
“Removed 203 rows containing non-finite values (stat_bin).”
Warning message:
“Removed 2 rows containing missing values (geom_bar).”
In [62]: test_4.5()
[1] "Success!"
Question 4.6
{points: 1}
Plot the departure delay time data as a histogram with the same format as the previous plot. Hint:
copy and paste your code from the previous block! The only thing that will change is the column from
delay_data that you use for the x-axis.
Warning message:
“Removed 201 rows containing non-finite values (stat_bin).”
Warning message:
“Removed 2 rows containing missing values (geom_bar).”
In [66]: test_4.6()
[1] "Success!"
Question 4.7
{points: 1}
Look at the two plots you generated. Are departures from or arrivals to BOS more likely to be on
time (at most 15 minutes ahead/behind schedule)?
In [68]: test_4.7()
[1] "Success!"
So far, we've done everything using the delay_data database reference object constructed
using functions from the dbplyr library. Remember: this isn't the data itself! If we want to save
the small data subset that we've constructed to our local machines (perhaps to share it on the web
or with collaborators), we'll need to take one last step.
Question 4.8.1
{points: 1}
We want to download the arrival / departure delay data where the origin airport is BOS from the
database. We will use the collect function to do this. Which of the following should you use?
A. collect(delay_data)
B. collect(flights_table_name)
C. collect(conn)
D. collect(flight_data)
Assign your answer as a single character to an object called answer4.8.1 . For example,
answer4.8.1 <- 'E'
In [70]: test_4.8.1()
[1] "Success!"
Question 4.8.2
{points: 1}
If you input the wrong argument to the collect() function below, your worksheet will time
out. Please double-check that you have the correct answer to question 4.8.1 above and input the
correct argument to the collect() function below!
Use the collect function to download the arrival / departure times data where the origin airport
is BOS from the database and store it in a dataframe object called delay_dataframe . Then, use
the write_csv function to write the dataframe to a file called delay_data.csv . Save the file
in the data/ folder.
Note: there are many possible ways to use write_csv to customize the output. Just use the
defaults here!
In [ ]: #If you don't know how to call collect or write_csv, use this cell to
#check the documentation by calling ?collect or ?write_csv
In [72]: test_4.8.2()
[1] "Success!"
year
gwp_value
Specifically, we will scrape the two columns named "Year" and "Real GWP" in the table under the
header "Historical and prehistorical estimates". The end goal of this exercise is to create a line
plot with year on the x-axis and GWP value on the y-axis.
Under which of the following headers on the Wikipedia Gross world product page
(https://en.wikipedia.org/wiki/Gross_world_product) is the table we will scrape from?
B. Recent growth
D. See also
In [ ]: test_5.1.0()
C. year
In [ ]: test_5.1.1()
We need to now load the rvest package to begin our web scraping!
Question 5.2
{points: 1}
Use read_html to download information from the URL given in the cell below.
In [ ]: test_5.2()
Question 5.3
Run the cell below to create the first column of your data set (the year from the table under the
"Historical and prehistorical estimates" header). The node was obtained using SelectorGadget .
In [ ]: # Run this cell to create the first column for your data set.
year <- html_text(html_nodes(gwp, ".wikitable tbody:nth-child(1) td:nth-chi
head(year)
We can see that although we want numbers for the year, the data we scraped includes the
characters AD and \n (a newline character). We will have to do some string manipulation and
then convert the years from characters to numbers.
First we use the str_replace_all function to match the string " AD\n" and replace it with
nothing "" :
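In isolation, with made-up values, the replacement works like this:

```r
library(stringr)

years_raw <- c("2000 AD\n", "1950 AD\n")
str_replace_all(string = years_raw, pattern = " AD\n", replacement = "")
#> [1] "2000" "1950"
```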
When we print year, we can see that we were able to remove " AD\n" , but we missed that the
earliest years also contain " BC\n" ! There are also commas ( "," ) in the large BC years that we
will have to remove, and we need to put a - sign in front of the BC numbers so we don't confuse
them with the AD numbers once everything is converted to numbers. We will use a similar strategy
to clean this all up!
This week we will provide you the code to do this cleaning, next week you will learn to do these
kinds of things yourself. After we do all the string/text manipulation then we use the as.numeric
function to convert the text to numbers.
In [ ]: # Run this cell to clean up the year data and convert it to a number.
# Use grep to select the lines containing " BC\n" and put a - at the beginn
year[grepl(pattern = " BC\n", x = year)] <- str_replace_all(string = year[g
Question 5.4
{points: 1}
Create a new column for the gross world product (GWP) from the table we are scraping. Don't
forget to use SelectorGadget to obtain the CSS selector needed to scrape the GWP values
from the table we are scraping. Assign your answer to an object called gwp_value .
Fill in the ... in the cell below. Copy and paste your finished answer into the fail() .
In [ ]: test_5.4()
Again, looking at the output of head(gwp_value) we see we have some cleaning and type
conversions to do. We need to remove the commas, the extraneous trailing information in the first 3
columns, and the "\n" character again. We provide the code to do this below:
In [ ]: # Run this cell to clean up the year data and convert it to a number.
# Create a new variable called gwp_value_clean.
gwp_value_clean <- gwp_value
Question 5.5
{points: 1}
Use the tidyverse tibble function to create a data frame named gwp with year and
gwp_value as columns. The general form for the creating data frames from vectors/lists using
the tibble function is as follows:
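The general form (sketched here with made-up vectors) is:

```r
library(tibble)

# tibble(column_name = vector, ...)
example <- tibble(x = c(1, 2, 3),
                  y = c("a", "b", "c"))
```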
In [ ]: test_5.5()
One last piece of data transformation/wrangling we will do before we get to data visualization is to
create another column called sqrt_year , which scales the year values so that they will be more
informative when we plot them (if you look at our year data, we have a lot of years in the recent
past, and fewer and fewer as we go back in time). Often you can just transform the scale
within ggplot (for example, see what we do with the gwp_value later on), but the year value is
tricky for scaling because it contains negative values. So we need to first make everything positive,
then take the square root, and then re-transform the values that should be negative to negative
again! We provide the code to do this below.
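One compact way to express that three-step transformation (a sketch, not necessarily the exact code provided):

```r
library(dplyr)

# sign() preserves whether the year was BC (negative) or AD (positive),
# while sqrt(abs()) does the positive square-root scaling:
gwp <- mutate(gwp, sqrt_year = sign(year) * sqrt(abs(year)))
```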
Question 5.6
{points: 1}
Create a line plot using the gwp data frame where sqrt_year is on the x-axis and
gwp_value is on the y-axis. We provide the plot code to relabel the x-axis with the
human-understandable years instead of the transformed ones we plot. Name your plot object
gwp_historical . To make a line plot instead of a scatter plot, use the
geom_line() function instead of the geom_point() function.
options(repr.plot.width=8, repr.plot.height=3)
# your code here
fail() # No Answer - remove if you provide an answer
gwp_historical
In [ ]: test_5.6()
Question 5.7
{points: 1}
Looking at the line plot, when does the Gross World Product first start to increase more rapidly
(i.e., when does the slope of the line first change)?
In [ ]: test_5.7()