# Databases {#sec-import-databases}
```{r}
#| echo: false
source("_common.R")
```
## Introduction
A huge amount of data lives in databases, so it's essential that you know how to access it.
Sometimes you can ask someone to download a snapshot into a `.csv` for you, but this gets painful quickly: every time you need to make a change you'll have to communicate with another human.
You want to be able to reach into the database directly to get the data you need, when you need it.
In this chapter, you'll first learn the basics of the DBI package: how to use it to connect to a database and then retrieve data with a SQL[^databases-1] query.
**SQL**, short for **s**tructured **q**uery **l**anguage, is the lingua franca of databases, and is an important language for all data scientists to learn.
That said, we're not going to start with SQL, but instead we'll teach you dbplyr, which can translate your dplyr code to SQL.
We'll use that as a way to teach you some of the most important features of SQL.
You won't become a SQL master by the end of the chapter, but you will be able to identify the most important components and understand what they do.
[^databases-1]: SQL is either pronounced "s"-"q"-"l" or "sequel".
### Prerequisites
In this chapter, we'll introduce DBI and dbplyr.
DBI is a low-level interface that connects to databases and executes SQL; dbplyr is a high-level interface that translates your dplyr code to SQL queries then executes them with DBI.
```{r}
#| label: setup
#| message: false
library(DBI)
library(dbplyr)
library(tidyverse)
```
## Database basics
At the simplest level, you can think about a database as a collection of data frames, called **tables** in database terminology.
Like a data frame, a database table is a collection of named columns, where every value in the column is the same type.
There are three high level differences between data frames and database tables:
- Database tables are stored on disk and can be arbitrarily large.
Data frames are stored in memory, and are fundamentally limited (although that limit is still plenty large for many problems).
- Database tables almost always have indexes.
Much like the index of a book, a database index makes it possible to quickly find rows of interest without having to look at every single row.
Data frames and tibbles don't have indexes, but data.tables do, which is one of the reasons that they're so fast (there's a sketch of creating an index just after this list).
- Most classical databases are optimized for rapidly collecting data, not analyzing existing data.
These databases are called **row-oriented** because the data is stored row-by-row, rather than column-by-column like R.
More recently, there's been much development of **column-oriented** databases that make analyzing the existing data much faster.
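For example, here's a sketch of creating an index yourself by sending the DBMS a SQL statement (not run here; `con` is a database connection like the ones we'll create shortly, the index name is made up, and the exact syntax varies a little between DBMS's):
```{r}
#| eval: false
# Build an index on the `price` column of a `diamonds` table so that
# queries filtering on price can locate rows without scanning the table.
DBI::dbExecute(con, "CREATE INDEX idx_diamonds_price ON diamonds (price)")
```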
Databases are run by database management systems (**DBMS**'s for short), which come in three basic forms:
- **Client-server** DBMS's run on a powerful central server, which you connect to from your computer (the client). They are great for sharing data with multiple people in an organization. Popular client-server DBMS's include PostgreSQL, MariaDB, SQL Server, and Oracle.
- **Cloud** DBMS's, like Snowflake, Amazon's RedShift, and Google's BigQuery, are similar to client server DBMS's, but they run in the cloud. This means that they can easily handle extremely large datasets and can automatically provide more compute resources as needed.
- **In-process** DBMS's, like SQLite or duckdb, run entirely on your computer. They're great for working with large datasets where you're the primary user.
## Connecting to a database
To connect to the database from R, you'll use a pair of packages:
- You'll always use DBI (**d**ata**b**ase **i**nterface) because it provides a set of generic functions that connect to the database, upload data, run SQL queries, etc.
- You'll also use a package tailored for the DBMS you're connecting to.
This package translates the generic DBI commands into the specifics needed for a given DBMS.
There's usually one package for each DBMS, e.g.
RPostgres for PostgreSQL and RMariaDB for MySQL.
If you can't find a specific package for your DBMS, you can usually use the odbc package instead.
This uses the ODBC protocol supported by many DBMS's.
odbc requires a little more setup because you'll also need to install an ODBC driver and tell the odbc package where to find it.
Concretely, you create a database connection using `DBI::dbConnect()`.
The first argument selects the DBMS[^databases-2], then the second and subsequent arguments describe how to connect to it (i.e. where it lives and the credentials that you need to access it).
The following code shows a couple of typical examples:
[^databases-2]: Typically, this is the only function you'll use from the client package, so we recommend using `::` to pull out that one function, rather than loading the complete package with `library()`.
```{r}
#| eval: false
con <- DBI::dbConnect(
RMariaDB::MariaDB(),
username = "foo"
)
con <- DBI::dbConnect(
RPostgres::Postgres(),
hostname = "databases.mycompany.com",
port = 1234
)
```
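If you end up going through odbc, the call looks much the same; here's a minimal sketch, assuming you've installed a driver and configured a data source name (the `"my-database"` DSN here is made up):
```{r}
#| eval: false
con <- DBI::dbConnect(
  odbc::odbc(),
  dsn = "my-database"
)
```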
The precise details of the connection vary a lot from DBMS to DBMS so unfortunately we can't cover all the details here.
This means you'll need to do a little research on your own.
Typically you can ask the other data scientists in your team or talk to your DBA (**d**ata**b**ase **a**dministrator).
The initial setup will often take a little fiddling (and maybe some googling) to get it right, but you'll generally only need to do it once.
### In this book
Setting up a client-server or cloud DBMS would be a pain for this book, so we'll instead use an in-process DBMS that lives entirely in an R package: duckdb.
Thanks to the magic of DBI, the only difference between using duckdb and any other DBMS is how you'll connect to the database.
This makes it great to teach with because you can easily run this code as well as easily take what you learn and apply it elsewhere.
Connecting to duckdb is particularly simple because the defaults create a temporary database that is deleted when you quit R.
That's great for learning because it guarantees that you'll start from a clean slate every time you restart R:
```{r}
con <- DBI::dbConnect(duckdb::duckdb())
```
duckdb is a high-performance database that's designed very much for the needs of a data scientist.
We use it here because it's very easy to get started with, but it's also capable of handling gigabytes of data with great speed.
If you want to use duckdb for a real data analysis project, you'll also need to supply the `dbdir` argument to make a persistent database and tell duckdb where to save it.
Assuming you're using a project (@sec-workflow-scripts-projects), it's reasonable to store it in the `duckdb` directory of the current project:
```{r}
#| eval: false
con <- DBI::dbConnect(duckdb::duckdb(), dbdir = "duckdb")
```
### Load some data {#sec-load-data}
Since this is a new database, we need to start by adding some data.
Here we'll add the `mpg` and `diamonds` datasets from ggplot2 using `DBI::dbWriteTable()`.
The simplest usage of `dbWriteTable()` needs three arguments: a database connection, the name of the table to create in the database, and a data frame of data.
```{r}
dbWriteTable(con, "mpg", ggplot2::mpg)
dbWriteTable(con, "diamonds", ggplot2::diamonds)
```
If you're using duckdb in a real project, we highly recommend learning about `duckdb_read_csv()` and `duckdb_register_arrow()`.
These give you powerful and performant ways to quickly load data directly into duckdb, without having to first load it into R.
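For example, here's a sketch of what loading a CSV directly might look like (not run; `"flights.csv"` is a hypothetical file):
```{r}
#| eval: false
# duckdb parses the CSV itself and creates a `flights` table from it,
# skipping the round trip through an R data frame.
duckdb::duckdb_read_csv(con, "flights", "flights.csv")
```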
We'll also show off a useful technique for loading multiple files into a database in @sec-save-database.
### DBI basics
You can check that the data is loaded correctly by using a couple of other DBI functions: `dbListTables()` lists all tables in the database[^databases-3] and `dbReadTable()` retrieves the contents of a table.
[^databases-3]: At least, all the tables that you have permission to see.
```{r}
dbListTables(con)
con |>
dbReadTable("diamonds") |>
as_tibble()
```
`dbReadTable()` returns a `data.frame` so we use `as_tibble()` to convert it into a tibble so that it prints nicely.
If you already know SQL, you can use `dbGetQuery()` to get the results of running a query on the database:
```{r}
sql <- "
SELECT carat, cut, clarity, color, price
FROM diamonds
WHERE price > 15000
"
as_tibble(dbGetQuery(con, sql))
```
If you've never seen SQL before, don't worry!
You'll learn more about it shortly.
But if you read it carefully, you might guess that it selects five columns of the diamonds dataset and all the rows where `price` is greater than 15,000.
## dbplyr basics
Now that we've connected to a database and loaded up some data, we can start to learn about dbplyr.
dbplyr is a dplyr **backend**, which means that you keep writing dplyr code but the backend executes it differently.
In this case, dbplyr translates your code to SQL; other backends include [dtplyr](https://dtplyr.tidyverse.org) which translates to [data.table](https://r-datatable.com), and [multidplyr](https://multidplyr.tidyverse.org) which executes your code on multiple cores.
To use dbplyr, you must first use `tbl()` to create an object that represents a database table:
```{r}
diamonds_db <- tbl(con, "diamonds")
diamonds_db
```
::: callout-note
There are two other common ways to interact with a database.
First, many corporate databases are very large so you need some hierarchy to keep all the tables organized.
In that case you might need to supply a schema, or a catalog and a schema, in order to pick the table you're interested in:
```{r}
#| eval: false
diamonds_db <- tbl(con, in_schema("sales", "diamonds"))
diamonds_db <- tbl(con, in_catalog("north_america", "sales", "diamonds"))
```
Other times you might want to use your own SQL query as a starting point:
```{r}
#| eval: false
diamonds_db <- tbl(con, sql("SELECT * FROM diamonds"))
```
:::
This object is **lazy**; when you use dplyr verbs on it, dplyr doesn't do any work: it just records the sequence of operations that you want to perform and only performs them when needed.
For example, take the following pipeline:
```{r}
big_diamonds_db <- diamonds_db |>
filter(price > 15000) |>
select(carat:clarity, price)
big_diamonds_db
```
You can tell this object represents a database query because it prints the DBMS name at the top, and while it tells you the number of columns, it typically doesn't know the number of rows.
This is because finding the total number of rows usually requires executing the complete query, something we're trying to avoid.
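If you do want the number of rows, you can ask for it explicitly with `count()`, which dbplyr translates to a query built on `COUNT(*)`:
```{r}
big_diamonds_db |>
  count()
```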
You can see the SQL code that dbplyr generates by calling `show_query()`.
If you know dplyr, this is a great way to learn SQL!
Write some dplyr code, get dbplyr to translate it to SQL, and then try to figure out how the two languages match up.
```{r}
big_diamonds_db |>
show_query()
```
To get all the data back into R, you call `collect()`.
Behind the scenes, this generates the SQL, calls `dbGetQuery()` to get the data, then turns the result into a tibble:
```{r}
big_diamonds <- big_diamonds_db |>
collect()
big_diamonds
```
Typically, you'll use dbplyr to select the data you want from the database, performing basic filtering and aggregation using the translations described below.
Then, once you're ready to analyze the data with functions that are unique to R, you'll `collect()` the data to get an in-memory tibble, and continue your work with pure R code.
## SQL
The rest of the chapter will teach you a little SQL through the lens of dbplyr.
It's a rather non-traditional introduction to SQL but we hope it will get you quickly up to speed with the basics.
Luckily, if you understand dplyr you're in a great place to quickly pick up SQL because so many of the concepts are the same.
We'll explore the relationship between dplyr and SQL using a couple of old friends from the nycflights13 package: `flights` and `planes`.
These datasets are easy to get into our learning database because dbplyr comes with a function that copies the tables from nycflights13 to our database:
```{r}
dbplyr::copy_nycflights13(con)
flights <- tbl(con, "flights")
planes <- tbl(con, "planes")
```
```{r}
#| echo: false
options(dplyr.strict_sql = TRUE)
```
### SQL basics
The top-level components of SQL are called **statements**.
Common statements include `CREATE` for defining new tables, `INSERT` for adding data, and `SELECT` for retrieving data.
We will focus on `SELECT` statements, also called **queries**, because they are almost exclusively what you'll use as a data scientist.
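For completeness, here's roughly what the other two look like when sent to the database with DBI (a sketch, not run; the `favorites` table is made up):
```{r}
#| eval: false
DBI::dbExecute(con, "CREATE TABLE favorites (carat DOUBLE, price INTEGER)")
DBI::dbExecute(con, "INSERT INTO favorites VALUES (1.5, 12000)")
```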
A query is made up of **clauses**.
There are five important clauses: `SELECT`, `FROM`, `WHERE`, `ORDER BY`, and `GROUP BY`. Every query must have the `SELECT`[^databases-4] and `FROM`[^databases-5] clauses, and the simplest query is `SELECT * FROM table`, which selects all columns from the specified table. This is what dbplyr generates for an unadulterated table:
[^databases-4]: Confusingly, depending on the context, `SELECT` is either a statement or a clause.
To avoid this confusion, we'll generally use `SELECT` query instead of `SELECT` statement.
[^databases-5]: Ok, technically, only the `SELECT` is required, since you can write queries like `SELECT 1+1` to perform basic calculations.
But if you want to work with data (as you always do!) you'll also need a `FROM` clause.
```{r}
flights |> show_query()
planes |> show_query()
```
`WHERE` and `ORDER BY` control which rows are included and how they are ordered:
```{r}
flights |>
filter(dest == "IAH") |>
arrange(dep_delay) |>
show_query()
```
`GROUP BY` converts the query to a summary, causing aggregation to happen:
```{r}
flights |>
group_by(dest) |>
summarize(dep_delay = mean(dep_delay, na.rm = TRUE)) |>
show_query()
```
There are two important differences between dplyr verbs and SELECT clauses:
- In SQL, case doesn't matter: you can write `select`, `SELECT`, or even `SeLeCt`. In this book we'll stick with the common convention of writing SQL keywords in uppercase to distinguish them from table or variable names.
- In SQL, order matters: you must always write the clauses in the order `SELECT`, `FROM`, `WHERE`, `GROUP BY`, `ORDER BY`. Confusingly, this order doesn't match how the clauses are actually evaluated, which is first `FROM`, then `WHERE`, `GROUP BY`, `SELECT`, and `ORDER BY`; the example just after this list shows all five clauses in a single query.
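For example, here's a small pipeline that generates all five clauses at once:
```{r}
flights |>
  filter(!is.na(arr_delay)) |>
  group_by(dest) |>
  summarize(delay = mean(arr_delay, na.rm = TRUE)) |>
  arrange(delay) |>
  show_query()
```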
The following sections explore each clause in more detail.
::: callout-note
Note that while SQL is a standard, it is extremely complex and no database follows it exactly.
While the main components that we'll focus on in this book are very similar between DBMS's, there are many minor variations.
Fortunately, dbplyr is designed to handle this problem and generates different translations for different databases.
It's not perfect, but it's continually improving, and if you hit a problem you can file an issue [on GitHub](https://github.com/tidyverse/dbplyr/issues/) to help us do better.
:::
### SELECT
The `SELECT` clause is the workhorse of queries and performs the same job as `select()`, `mutate()`, `rename()`, `relocate()`, and, as you'll learn in the next section, `summarize()`.
`select()`, `rename()`, and `relocate()` have very direct translations to `SELECT` as they just affect where a column appears (if at all) along with its name:
```{r}
planes |>
select(tailnum, type, manufacturer, model, year) |>
show_query()
planes |>
select(tailnum, type, manufacturer, model, year) |>
rename(year_built = year) |>
show_query()
planes |>
select(tailnum, type, manufacturer, model, year) |>
relocate(manufacturer, model, .before = type) |>
show_query()
```
This example also shows you how SQL does renaming.
In SQL terminology renaming is called **aliasing** and is done with `AS`.
Note that unlike `mutate()`, the old name is on the left and the new name is on the right.
::: callout-note
In the examples above note that `"year"` and `"type"` are wrapped in double quotes.
That's because these are **reserved words** in duckdb, so dbplyr quotes them to avoid any potential confusion between column/table names and SQL operators.
When working with other databases you're likely to see every variable name quoted, because only a handful of client packages, like duckdb, know what all the reserved words are, so they quote everything to be safe.
``` sql
SELECT "tailnum", "type", "manufacturer", "model", "year"
FROM "planes"
```
Some other database systems use backticks instead of quotes:
``` sql
SELECT `tailnum`, `type`, `manufacturer`, `model`, `year`
FROM `planes`
```
:::
The translations for `mutate()` are similarly straightforward: each variable becomes a new expression in `SELECT`:
```{r}
flights |>
mutate(
speed = distance / (air_time / 60)
) |>
show_query()
```
We'll come back to the translation of individual components (like `/`) in @sec-sql-expressions.
### FROM
The `FROM` clause defines the data source.
It's going to be rather uninteresting for a little while, because we're just using single tables.
You'll see more complex examples once we hit the join functions.
### GROUP BY
`group_by()` is translated to the `GROUP BY`[^databases-6] clause and `summarize()` is translated to the `SELECT` clause:
[^databases-6]: This is no coincidence: the dplyr function name was inspired by the SQL clause.
```{r}
diamonds_db |>
group_by(cut) |>
summarize(
n = n(),
avg_price = mean(price, na.rm = TRUE)
) |>
show_query()
```
We'll come back to what's happening with the translation of `n()` and `mean()` in @sec-sql-expressions.
### WHERE
`filter()` is translated to the `WHERE` clause:
```{r}
flights |>
filter(dest == "IAH" | dest == "HOU") |>
show_query()
flights |>
filter(arr_delay > 0 & arr_delay < 20) |>
show_query()
```
There are a few important details to note here:
- `|` becomes `OR` and `&` becomes `AND`.
- SQL uses `=` for comparison, not `==`. SQL doesn't have assignment, so there's no potential for confusion there.
- SQL uses only `''` for strings, not `""`. In SQL, `""` is used to identify variables, like R's ``` `` ```.
Another useful SQL operator is `IN`, which is very close to R's `%in%`:
```{r}
flights |>
filter(dest %in% c("IAH", "HOU")) |>
show_query()
```
SQL uses `NULL` instead of `NA`.
`NULL`s behave similarly to `NA`s.
The main difference is that while they're "infectious" in comparisons and arithmetic, they are silently dropped when summarizing.
dbplyr will remind you about this behavior the first time you hit it:
```{r}
flights |>
group_by(dest) |>
summarize(delay = mean(arr_delay))
```
If you want to learn more about how `NULL`s work, you might enjoy "[*Three valued logic*](https://modern-sql.com/concept/three-valued-logic)" by Markus Winand.
In general, you can work with `NULL`s using the functions you'd use for `NA`s in R:
```{r}
flights |>
filter(!is.na(dep_delay)) |>
show_query()
```
This SQL query illustrates one of the drawbacks of dbplyr: while the SQL is correct, it isn't as simple as you might write by hand.
In this case, you could drop the parentheses and use a special operator that's easier to read:
``` sql
WHERE "dep_delay" IS NOT NULL
```
Note that if you `filter()` a variable that you created using a summarize, dbplyr will generate a `HAVING` clause, rather than a `WHERE` clause.
This is one of the idiosyncrasies of SQL: `WHERE` is evaluated before `SELECT` and `GROUP BY`, so SQL needs another clause that's evaluated afterwards.
```{r}
diamonds_db |>
group_by(cut) |>
summarize(n = n()) |>
filter(n > 100) |>
show_query()
```
### ORDER BY
Ordering rows involves a straightforward translation from `arrange()` to the `ORDER BY` clause:
```{r}
flights |>
arrange(year, month, day, desc(dep_delay)) |>
show_query()
```
Notice how `desc()` is translated to `DESC`: this is one of the many dplyr functions whose name was directly inspired by SQL.
### Subqueries
Sometimes it's not possible to translate a dplyr pipeline into a single `SELECT` statement and you need to use a subquery.
A **subquery** is just a query used as a data source in the `FROM` clause, instead of the usual table.
dbplyr typically uses subqueries to work around limitations of SQL.
For example, expressions in the `SELECT` clause can't refer to columns that were just created.
That means that the following (silly) dplyr pipeline needs to happen in two steps: the first (inner) query computes `year1` and then the second (outer) query can compute `year2`.
```{r}
flights |>
mutate(
year1 = year + 1,
year2 = year1 + 1
) |>
show_query()
```
You'll also see this if you attempt to `filter()` a variable that you just created.
Remember, even though `WHERE` is written after `SELECT`, it's evaluated before it, so we need a subquery in this (silly) example:
```{r}
flights |>
mutate(year1 = year + 1) |>
filter(year1 == 2014) |>
show_query()
```
Sometimes dbplyr will create a subquery where it's not needed because it doesn't yet know how to optimize that translation.
As dbplyr improves over time, these cases will get rarer but will probably never go away.
### Joins
If you're familiar with dplyr's joins, SQL joins are very similar.
Here's a simple example:
```{r}
flights |>
left_join(planes |> rename(year_built = year), by = "tailnum") |>
show_query()
```
The main thing to notice here is the syntax: SQL joins use sub-clauses of the `FROM` clause to bring in additional tables, using `ON` to define how the tables are related.
dplyr's names for these functions are so closely connected to SQL that you can easily guess the equivalent SQL for `inner_join()`, `right_join()`, and `full_join()`:
``` sql
SELECT flights.*, "type", manufacturer, model, engines, seats, speed
FROM flights
INNER JOIN planes ON (flights.tailnum = planes.tailnum)

SELECT flights.*, "type", manufacturer, model, engines, seats, speed
FROM flights
RIGHT JOIN planes ON (flights.tailnum = planes.tailnum)

SELECT flights.*, "type", manufacturer, model, engines, seats, speed
FROM flights
FULL JOIN planes ON (flights.tailnum = planes.tailnum)
```
You're likely to need many joins when working with data from a database.
That's because database tables are often stored in a highly normalized form, where each "fact" is stored in a single place; to assemble a complete dataset for analysis you need to navigate a complex network of tables connected by primary and foreign keys.
If you hit this scenario, the [dm package](https://cynkra.github.io/dm/), by Tobias Schieferdecker, Kirill Müller, and Darko Bergant, is a life saver.
It can automatically determine the connections between tables using the constraints that DBAs often supply, visualize the connections so you can see what's going on, and generate the joins you need to connect one table to another.
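A minimal sketch of the workflow (not run here; assumes the dm package is installed):
```{r}
#| eval: false
library(dm)

my_dm <- dm_from_con(con)  # learn the tables and any key constraints
dm_draw(my_dm)             # draw a diagram of how the tables connect
```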
### Other verbs
dbplyr also translates other verbs like `distinct()`, `slice_*()`, and `intersect()`, and a growing selection of tidyr functions like `pivot_longer()` and `pivot_wider()`.
The easiest way to see the full set of what's currently available is to visit the dbplyr website: <https://dbplyr.tidyverse.org/reference/>.
### Exercises
1. What is `distinct()` translated to?
How about `head()`?
2. Explain what each of the following SQL queries do and try to recreate them using dbplyr.
``` sql
SELECT *
FROM flights
WHERE dep_delay < arr_delay

SELECT *, distance / (air_time / 60) AS speed
FROM flights
```
## Function translations {#sec-sql-expressions}
So far we've focused on the big picture of how dplyr verbs are translated to the clauses of a query.
Now we're going to zoom in a little and talk about the translation of the R functions that work with individual columns, e.g., what happens when you use `mean(x)` in a `summarize()`?
To help see what's going on, we'll use a couple of little helper functions that run a `summarize()` or `mutate()` and show the generated SQL.
That will make it a little easier to explore a few variations and see how summaries and transformations can differ.
```{r}
summarize_query <- function(df, ...) {
df |>
summarize(...) |>
show_query()
}
mutate_query <- function(df, ...) {
df |>
mutate(..., .keep = "none") |>
show_query()
}
```
Let's dive in with some summaries!
Looking at the code below you'll notice that some summary functions, like `mean()`, have a relatively simple translation while others, like `median()`, are much more complex.
The complexity is typically higher for operations that are common in statistics but less common in databases.
```{r}
flights |>
group_by(year, month, day) |>
summarize_query(
mean = mean(arr_delay, na.rm = TRUE),
median = median(arr_delay, na.rm = TRUE)
)
```
The translation of summary functions becomes more complicated when you use them inside a `mutate()` because they have to turn into so-called **window** functions.
In SQL, you turn an ordinary aggregation function into a window function by adding `OVER` after it:
```{r}
flights |>
group_by(year, month, day) |>
mutate_query(
mean = mean(arr_delay, na.rm = TRUE),
)
```
In SQL, the `GROUP BY` clause is used exclusively for summaries, so here you can see that the grouping has moved from the `GROUP BY` clause to the `PARTITION BY` argument of `OVER`.
Window functions include all functions that look forward or backwards, like `lead()` and `lag()` which look at the "previous" or "next" value respectively:
```{r}
flights |>
group_by(dest) |>
arrange(time_hour) |>
mutate_query(
lead = lead(arr_delay),
lag = lag(arr_delay)
)
```
Here it's important to `arrange()` the data, because SQL tables have no intrinsic order.
In fact, if you don't use `arrange()` you might get the rows back in a different order every time!
Notice that the ordering information is repeated for window functions: the `ORDER BY` clause of the main query doesn't automatically apply to them.
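If you only want to control the order seen by the window functions, dbplyr also provides `window_order()`, which sets the window's `ORDER BY` without sorting the final result; a quick sketch:
```{r}
flights |>
  group_by(dest) |>
  window_order(time_hour) |>
  mutate_query(
    lag = lag(arr_delay)
  )
```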
Another important SQL function is `CASE WHEN`. It's used as the translation of `if_else()` and `case_when()`, the dplyr function that it directly inspired.
Here are a couple of simple examples:
```{r}
flights |>
mutate_query(
description = if_else(arr_delay > 0, "delayed", "on-time")
)
flights |>
mutate_query(
description =
case_when(
arr_delay < -5 ~ "early",
arr_delay < 5 ~ "on-time",
arr_delay >= 5 ~ "late"
)
)
```
`CASE WHEN` is also used for some other functions that don't have a direct translation from R to SQL.
A good example of this is `cut()`:
```{r}
flights |>
mutate_query(
description = cut(
arr_delay,
breaks = c(-Inf, -5, 5, Inf),
labels = c("early", "on-time", "late")
)
)
```
dbplyr also translates common string and date-time manipulation functions, which you can learn about in `vignette("translation-function", package = "dbplyr")`.
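For a quick taste, here's one string translation, using the `mutate_query()` helper from above (`str_to_lower()` becomes SQL's `LOWER()`):
```{r}
flights |>
  mutate_query(
    carrier = str_to_lower(carrier)
  )
```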
dbplyr's translations are certainly not perfect, and there are many R functions that aren't translated yet, but dbplyr does a surprisingly good job covering the functions that you'll use most of the time.
## Summary
In this chapter you learned how to access data from databases.
We focused on dbplyr, a dplyr "backend" that allows you to write the dplyr code you're familiar with, and have it be automatically translated to SQL.
We used that translation to teach you a little SQL; it's important to learn some SQL because it's *the* most commonly used language for working with data and knowing some will make it easier for you to communicate with other data folks who don't use R.
If you've finished this chapter and would like to learn more about SQL, we have two recommendations:
- [*SQL for Data Scientists*](https://sqlfordatascientists.com) by Renée M. P. Teate is an introduction to SQL designed specifically for the needs of data scientists, and includes examples of the sort of highly interconnected data you're likely to encounter in real organizations.
- [*Practical SQL*](https://www.practicalsql.com) by Anthony DeBarros is written from the perspective of a data journalist (a data scientist specialized in telling compelling stories) and goes into more detail about getting your data into a database and running your own DBMS.
In the next chapter, we'll learn about another dplyr backend for working with large data: arrow.
Arrow is designed for working with large files on disk, and is a natural complement to databases.