Overview
dplyr is an R package for working with structured data both in and outside of R. dplyr makes data manipulation for R users easy, consistent, and performant. With dplyr as an interface to manipulating Spark DataFrames, you can:
- Select, filter, and aggregate data
- Use window functions (e.g. for sampling)
- Perform joins on DataFrames
- Collect data from Spark into R
Statements in dplyr can be chained together using pipes defined by the magrittr R package. dplyr also supports non-standard evaluation of its arguments. For more information on dplyr, see the introduction, a guide for connecting to databases, and a variety of vignettes.
Reading Data
You can read data into Spark DataFrames using the following functions:
| Function | Description |
|---|---|
| spark_read_csv | Reads a CSV file and provides a data source compatible with dplyr |
| spark_read_json | Reads a JSON file and provides a data source compatible with dplyr |
| spark_read_parquet | Reads a Parquet file and provides a data source compatible with dplyr |
Regardless of the format of your data, Spark supports reading data from a variety of different data sources. These include data stored on HDFS (hdfs:// protocol), Amazon S3 (s3n:// protocol), or local files available to the Spark worker nodes (file:// protocol).
Each of these functions returns a reference to a Spark DataFrame which can be used as a dplyr table (tbl).
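For example, reading a CSV file from HDFS into a Spark DataFrame might look like the sketch below (the connection call and file path are illustrative):

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance (a cluster master URL would also work)
sc <- spark_connect(master = "local")

# Read the CSV into Spark and get back a dplyr-compatible table reference
flights_csv_tbl <- spark_read_csv(sc, name = "flights_csv",
                                  path = "hdfs://path/to/flights.csv")
```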
Flights Data
This guide will demonstrate some of the basic data manipulation verbs of dplyr by using data from the nycflights13 R package. This package contains data for all 336,776 flights departing New York City in 2013. It also includes useful metadata on airlines, airports, weather, and planes. The data comes from the US Bureau of Transportation Statistics, and is documented in ?nycflights13.
Connect to the cluster and copy the flights data using the copy_to function. Caveat: The flight data in nycflights13 is convenient for dplyr demonstrations because it is small, but in practice large data should rarely be copied directly from R objects.
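A minimal version of that setup might look like this (assuming a local Spark connection):

```r
library(sparklyr)
library(dplyr)
library(nycflights13)

sc <- spark_connect(master = "local")

# Copy the flights data frame into Spark and keep a dplyr reference (tbl) to it
flights_tbl <- copy_to(sc, nycflights13::flights, "flights", overwrite = TRUE)
```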
dplyr Verbs
Verbs are dplyr commands for manipulating data. When connected to a Spark DataFrame, dplyr translates the commands into Spark SQL statements. Remote data sources use exactly the same five verbs as local data sources. Here are the five verbs with their corresponding SQL commands:
| dplyr verb | SQL command |
|---|---|
| select | SELECT |
| filter | WHERE |
| arrange | ORDER |
| summarise | aggregators: sum, min, sd, etc. |
| mutate | operators: +, *, log, etc. |
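As a quick illustration, here is a sketch that chains a few of these verbs on the flights table copied above (flights_tbl is the reference created with copy_to):

```r
flights_tbl %>%
  select(year, month, day, carrier, dep_delay) %>%   # SELECT
  filter(dep_delay > 60) %>%                         # WHERE
  arrange(desc(dep_delay))                           # ORDER
```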
Laziness
When working with databases, dplyr tries to be as lazy as possible:
- It never pulls data into R unless you explicitly ask for it.
- It delays doing any work until the last possible moment: it collects together everything you want to do and then sends it to the database in one step.
For example, take the following code:
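A sketch of such a sequence, building up a pipeline of operations step by step (the filter values are illustrative):

```r
c1 <- filter(flights_tbl, day == 17, month == 5,
             carrier %in% c("UA", "WN", "AA", "DL"))
c2 <- select(c1, year, month, day, carrier, dep_delay, air_time, distance)
c3 <- arrange(c2, year, month, day, carrier)
c4 <- mutate(c3, air_time_hours = air_time / 60)
```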
This sequence of operations never actually touches the database. It's not until you ask for the data (e.g. by printing c4) that dplyr requests the results from the database.
Piping
You can use magrittr pipes to write cleaner syntax. Using the same example from above, you can write a much cleaner version like this:
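A piped rewrite of the same pipeline might look like this:

```r
c4 <- flights_tbl %>%
  filter(day == 17, month == 5, carrier %in% c("UA", "WN", "AA", "DL")) %>%
  select(year, month, day, carrier, dep_delay, air_time, distance) %>%
  arrange(year, month, day, carrier) %>%
  mutate(air_time_hours = air_time / 60)
```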
Grouping
The group_by function corresponds to the GROUP BY statement in SQL.
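For example, a grouped summary of departure delay by carrier could be written roughly as follows (a sketch using the flights table from above):

```r
flights_tbl %>%
  group_by(carrier) %>%
  summarise(count = n(),
            mean_dep_delay = mean(dep_delay, na.rm = TRUE))
```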
Collecting to R
You can copy data from Spark into R’s memory by using collect().
collect() executes the Spark query and returns the results to R for further analysis and visualization.
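For example, the lazily defined c4 from above could be brought into R like this (a sketch):

```r
# Executes the Spark query and returns an ordinary R data frame
delays_local <- collect(c4)

# From here on, regular in-memory R tools apply
summary(delays_local$air_time_hours)
```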
SQL Translation
It's relatively straightforward to translate R code to SQL (or indeed to any programming language) when doing simple mathematical operations of the form you normally use when filtering, mutating, and summarizing. dplyr knows how to convert many common R operators, math functions, and aggregators to Spark SQL.
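You can inspect the SQL that a given chain of verbs produces; for example (a sketch, using the flights table from above):

```r
flights_tbl %>%
  filter(dep_delay > 60) %>%
  mutate(speed = distance / air_time * 60) %>%
  dbplyr::sql_render()
```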
Window Functions
dplyr supports Spark SQL window functions. Window functions are used in conjunction with mutate and filter to solve a wide range of problems. You can compare the dplyr syntax to the query it has generated by using dbplyr::sql_render().
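A sketch of a window-function query, keeping the two most-delayed flights per carrier (the ranking logic and column choices are illustrative):

```r
best_by_carrier <- flights_tbl %>%
  group_by(carrier) %>%
  filter(min_rank(desc(dep_delay)) <= 2) %>%  # becomes a RANK() window function per carrier
  select(carrier, flight, dep_delay)

# Compare the dplyr syntax with the generated Spark SQL
dbplyr::sql_render(best_by_carrier)
```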
Performing Joins
It's rare that a data analysis involves only a single table of data. In practice, you'll normally have many tables that contribute to an analysis, and you need flexible tools to combine them. In dplyr, there are three families of verbs that work with two tables at a time:
- Mutating joins, which add new variables to one table from matching rows in another.
- Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table.
- Set operations, which combine the observations in the data sets as if they were set elements.
All two-table verbs work similarly. The first two arguments are x and y, and provide the tables to combine. The output is always a new table with the same type as x.
The following statements are equivalent:
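A sketch, assuming an airlines_tbl copied to Spark in the same way as flights_tbl; with no by argument, dplyr joins on the shared carrier column, so both forms produce the same result:

```r
airlines_tbl <- copy_to(sc, nycflights13::airlines, "airlines", overwrite = TRUE)

flights_tbl %>% left_join(airlines_tbl)
flights_tbl %>% left_join(airlines_tbl, by = "carrier")
```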
Sampling
You can use sample_n() and sample_frac() to take a random sample of rows: use sample_n() for a fixed number and sample_frac() for a fixed fraction.
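For example (a sketch on the flights table):

```r
sample_n(flights_tbl, 10)        # a fixed number of rows
sample_frac(flights_tbl, 0.01)   # a fixed fraction of the rows
```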
Writing Data
It is often useful to save the results of your analysis or the tables that you have generated on your Spark cluster into persistent storage. The best option in many scenarios is to write the table out to a Parquet file using the spark_write_parquet function. For example:
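A sketch, with an illustrative HDFS path:

```r
spark_write_parquet(tbl, "hdfs://nn1.example.com/data/flights-parquet")
```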
This will write the Spark DataFrame referenced by the tbl R variable to the given HDFS path. You can use the spark_read_parquet function to read the same table back into a subsequent Spark session:
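For example (same illustrative path as above):

```r
tbl <- spark_read_parquet(sc, name = "data",
                          path = "hdfs://nn1.example.com/data/flights-parquet")
```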
You can also write data as CSV or JSON using the spark_write_csv and spark_write_json functions.
Hive Functions
Many of Hive's built-in functions (UDF) and built-in aggregate functions (UDAF) can be called inside dplyr's mutate and summarize. The Language Reference UDF page provides the list of available functions.
The following example uses the datediff and current_date Hive UDFs to compute the difference between the flight_date and the current system date:
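A sketch of such a query, first assembling a flight_date string from the year, month, and day columns and then calling the Hive UDFs inside mutate (the date formatting and grouping are illustrative):

```r
flights_tbl %>%
  mutate(flight_date = paste(year, month, day, sep = "-"),
         days_since  = datediff(current_date(), flight_date)) %>%
  group_by(flight_date, days_since) %>%
  tally() %>%
  arrange(desc(days_since))
```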
Motivation
I use R to extract data held in Microsoft SQL Server databases on a daily basis.
When I first started, I was confused by all the different ways to accomplish this task. I was a bit overwhelmed trying to choose the "best" option given the specific job at hand.
I want to share what approaches I’ve landed on to help others who may want a simple list of options to get started with.
Scope
This post is about reading data from a database, not writing to one.
I prefer to use packages in the tidyverse so I’ll focus on those packages.
While it’s possible to generalize many of the concepts I write about here to other DBMS systems I will focus exclusively on Microsoft SQL Server. I hope this will provide simple, prescriptive guidance for those working in a similar configuration.
The data for these examples is stored using Microsoft SQL Server Express. Free download available here.
One last thing - these are a few options I populated my toolbox with. They have served me well over the past two years as an analyst in an enterprise environment, but are definitely not the only options available.
Setup
Connect to the server
I use the keyring package to keep my credentials out of my R code. You can use the great documentation available from RStudio to learn how to do the same.
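A connection sketch, assuming an ODBC driver for SQL Server and a password stored with keyring (the driver, server, database, and credential names are placeholders):

```r
library(DBI)
library(odbc)
library(keyring)

con <- dbConnect(odbc(),
                 Driver   = "SQL Server",
                 Server   = "my-server",
                 Database = "my-database",
                 UID      = "my-user-id",
                 PWD      = key_get("my-database-credential"))
```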
Write some sample data
Note that I set the temporary argument to TRUE so that the data is written to the tempdb on SQL server, which will result in it being deleted on disconnection.
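The write step probably looked something like this, copying a couple of nycflights13 tables into temporary tables on the server (a sketch):

```r
library(dplyr)
library(nycflights13)

flights_tbl <- copy_to(con, nycflights13::flights, "flights", temporary = TRUE)
planes_tbl  <- copy_to(con, nycflights13::planes,  "planes",  temporary = TRUE)
```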

This results in dplyr prefixing the table name with "##".
SOURCE: https://db.rstudio.com/dplyr/#connecting-to-the-database
Option 1: Use dplyr syntax and let dbplyr handle the rest
When I use this option
This is my default option.
I do almost all of my analysis in R and this avoids fragmenting my work and thoughts across different tools.
Examples
Example 1: Filter rows and retrieve selected columns
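A sketch, assuming the flights_tbl reference created by the copy_to call above:

```r
flights_tbl %>%
  filter(origin == "JFK", dest == "LAX") %>%
  select(year, month, day, carrier, dep_delay) %>%
  collect()
```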
Example 2: join across tables and retrieve selected columns
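A sketch joining the temporary flights and planes tables (the column choices are illustrative):

```r
flights_tbl %>%
  inner_join(planes_tbl, by = "tailnum") %>%
  select(carrier, flight, tailnum, manufacturer, model) %>%
  collect()
```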
Example 3: Summarize and count
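A sketch that counts flights whose tailnum has no match in planes (the exact summary is an assumption based on the observation below):

```r
flights_tbl %>%
  anti_join(planes_tbl, by = "tailnum") %>%
  summarise(n_flights         = n(),
            distinct_tailnums = n_distinct(tailnum)) %>%
  collect()
```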
Quite a few tailnum values in flights are not present in planes. Interesting!
Option 2: Write SQL syntax and have dplyr and dbplyr run the query
When I use this option
I use this option when I am reusing a fairly short, existing SQL query with minor modifications.
Example 1: Simple selection of records using SQL syntax
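A sketch, remembering that the temporary table name carries the "##" prefix mentioned above:

```r
library(dbplyr)

tbl(con, sql("SELECT year, month, day, carrier, dep_delay
              FROM ##flights
              WHERE dep_delay > 60")) %>%
  collect()
```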
Example 2: Use dplyr syntax to enhance a raw SQL query
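A sketch that starts from a raw SQL query and then layers dplyr verbs on top of it:

```r
tbl(con, sql("SELECT * FROM ##flights WHERE dep_delay > 60")) %>%
  group_by(carrier) %>%
  summarise(long_delays = n()) %>%
  arrange(desc(long_delays)) %>%
  collect()
```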
Option 3: Store the SQL query in a text file and have dplyr and dbplyr run the query
When I use this option
I use this approach under the following conditions:

- I’m reusing existing SQL code or when collaborating with someone who will be writing new code in SQL
- The SQL code is longer than a line or two
I prefer to "modularize" my R code. Having an extremely long SQL statement in my R code doesn't abstract away the complexity of the SQL query. Putting the query into its own file helps achieve my desired level of abstraction.
In conjunction with source control it makes tracking changes to the definition of a data set simple.
More importantly, it's a really useful way to collaborate with others who are comfortable with SQL but don't use R. For example, I recently used this approach on a project involving aggregation of multiple data sets. Another team member focused on building out the data collection logic for some of the data sets in SQL. Once he had them built and validated he handed off the query to me and I pasted it into a text file.
Step 1: Put your SQL code into a text file
Here is some example SQL code that might be in such a file.
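The contents here are illustrative, assuming the temporary flights table written earlier:

```sql
SELECT carrier, origin, dest, dep_delay
FROM ##flights
WHERE dep_delay > 60
```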
Let's say that SQL code was stored in a text file called flights.sql.
Step 2: Use the SQL code in the file to retrieve data and execute the query.
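A sketch, assuming flights.sql sits in the working directory and con is the connection from the setup above:

```r
# Read the query text from the file, then let dplyr/dbplyr run it lazily
query <- paste(readLines("flights.sql"), collapse = "\n")

flights_from_file <- tbl(con, sql(query)) %>%
  collect()
```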
