Returns public statuses via one of the following four methods:

  • 1. A small random sample of all publicly available tweets

  • 2. Filtering via a search-like query (up to 400 keywords)

  • 3. Tracking via a vector of user IDs (up to 5000 user_ids)

  • 4. Location via geo coordinates (1-360 degree location boxes)

Stream with hardwired reconnection method to ensure timeout integrity.

stream_tweets(q = "", timeout = 30, parse = TRUE, token = NULL,
  file_name = NULL, verbose = TRUE, ...)

stream_tweets2(..., dir = NULL, append = FALSE)



Query used to select and customize the streaming collection method. There are four possible methods. (1) The default, q = "", returns a small random sample of all publicly available Twitter statuses. (2) To filter by keyword, provide a comma-separated character string with the desired phrase(s) and keyword(s). (3) Track users by providing a comma-separated list of user IDs or screen names. (4) Use four latitude/longitude bounding box points to stream by geo location. These must be provided via a vector of length 4, e.g., c(-125, 26, -65, 49).
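As an illustrative sketch (not run here; each call assumes a valid Twitter token is available, and the keywords and screen names are placeholders), the four methods map onto q as follows:

```r
library(rtweet)

## (1) default: small random sample of all public statuses
rt <- stream_tweets(q = "", timeout = 30)

## (2) filter by comma-separated keywords/phrases
rt <- stream_tweets(q = "rstats,data science", timeout = 30)

## (3) track users via comma-separated user IDs or screen names
rt <- stream_tweets(q = "hadleywickham,kearneymw", timeout = 30)

## (4) geo location via a length-4 lng/lat bounding box
rt <- stream_tweets(q = c(-125, 26, -65, 49), timeout = 30)
```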


Numeric scalar specifying the amount of time, in seconds, to leave the connection open while streaming/capturing tweets. By default, this is set to 30 seconds. To stream indefinitely, set timeout = Inf, or use timeout = FALSE, which also ensures the JSON file is not deleted upon completion.


Logical, indicating whether to return parsed data. By default, parse = TRUE, and this function does the parsing for you. However, for larger streams, or for automated scripts designed to continuously collect data, this should be set to FALSE, as the parsing process can eat up processing resources and time. For other uses, setting parse to TRUE saves you from having to sort and parse the messy list structure returned by Twitter. (Note: if you set parse to FALSE, you can use the parse_stream function to parse the JSON file at a later point in time.)
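For long-running collection, a minimal sketch of deferring the parsing step (not run here; requires a token, and the file name is a placeholder):

```r
library(rtweet)

## collect raw JSON without parsing, writing to "raw-tweets.json"
stream_tweets(q = "", timeout = 60, parse = FALSE, file_name = "raw-tweets")

## parse the saved stream later, when processing time is cheap
rt <- parse_stream("raw-tweets.json")
```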


OAuth token. By default, token = NULL fetches a non-exhausted token from an environment variable. Find instructions on how to create tokens and set up an environment variable in the tokens vignette (in R, send ?tokens to the console).


Character, name of file. By default, a temporary file is created, tweets are parsed and returned to the parent environment, and the temporary file is deleted.


Logical, indicating whether or not to include output processing/retrieval messages.

Insert magical parameters, spell, or potion here. Or, more practically, filter for tweets by language, e.g., language = "en".


Name of directory in which JSON files should be written. The default, NULL, will create a time-stamped "stream" folder in the current working directory. If a directory name is provided that does not already exist, one will be created.


Logical, indicating whether to append to or overwrite file_name if the file already exists. Defaults to FALSE, meaning this function will overwrite the preexisting file_name (in other words, it will delete any old file with the same name as file_name). If set to TRUE, the data will be added as new lines to the pre-existing file.
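A minimal sketch of stream_tweets2() with these arguments (not run here; requires a token, and the query and directory name are placeholders):

```r
library(rtweet)

## write stream files into "mystream/", appending to existing files
## rather than overwriting them if the stream is restarted
stream_tweets2(q = "rstats", timeout = 60, dir = "mystream", append = TRUE)
```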


Tweets data returned as a data frame, with users data as an attribute.

stream_tweets2 returns data in the same format as the original search_tweets function.

See also

Other stream tweets: parse_stream


## stream tweets mentioning "election" for 90 seconds
e <- stream_tweets("election", timeout = 90)

## data frame where each observation (row) is a different tweet

## users data also retrieved, access it via users_data()

## plot tweet frequency
ts_plot(e, "secs")

## stream tweets mentioning realdonaldtrump for 30 seconds
djt <- stream_tweets("realdonaldtrump", timeout = 30)

## preview tweets data

## get user IDs of people who mentioned trump
usrs <- users_data(djt)

## lookup users data
usrdat <- lookup_users(unique(usrs$user_id))

## preview users data

## store large amount of tweets in files using continuous streams
## by default, stream_tweets() returns a random sample of all tweets
## leave the query field blank for the random sample of all tweets.
stream_tweets(
  timeout = (60 * 10),
  parse = FALSE,
  file_name = "tweets1"
)
stream_tweets(
  timeout = (60 * 10),
  parse = FALSE,
  file_name = "tweets2"
)

## parse tweets at a later time using parse_stream function
tw1 <- parse_stream("tweets1.json")

tw2 <- parse_stream("tweets2.json")

## streaming tweets by specifying lat/long coordinates

## stream continental US tweets for 5 minutes
usa <- stream_tweets(
  c(-125, 26, -65, 49),
  timeout = 300
)

## use lookup_coords() for a shortcut version of the above code
usa <- stream_tweets(
  lookup_coords("usa"),
  timeout = 300
)

## stream world tweets for 5 mins, save to JSON file
## shortcut coords note: lookup_coords("world")
world.old <- stream_tweets(
  c(-180, -90, 180, 90),
  timeout = (60 * 5),
  parse = FALSE,
  file_name = "world-tweets.json"
)

## read in JSON file
rtworld <- parse_stream("world-tweets.json")

## world data set with lat/lng coords variables
x <- lat_lng(rtworld)
