Installing Curl and JQ

We’re going to use curl to fetch the JSON feed from Reddit and jq to parse the JSON data and extract the fields we want from the results. Install these two dependencies using apt-get on Ubuntu and other Debian-based Linux distributions. On other Linux distributions, use your distribution’s package management tool instead.
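On Ubuntu and other Debian-based distributions, that’s:

    sudo apt-get update
    sudo apt-get install curl jq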

Fetching Some JSON Data from Reddit

Let’s see what the data feed looks like. Use curl to fetch the latest posts from the MildlyInteresting subreddit:
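    curl -s -A "reddit scraper example" https://www.reddit.com/r/MildlyInteresting.json

Appending .json to a subreddit’s URL asks Reddit to return the listing as JSON instead of HTML.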

Note the two options used before the URL. The first, -s, forces curl to run in silent mode so that we don’t see any output except the data from Reddit’s servers. The next option and the parameter that follows, -A "reddit scraper example", set a custom user agent string that helps Reddit identify the service accessing its data. The Reddit API servers apply rate limits based on the user agent string, so setting a custom value segments our rate limit away from other callers and reduces the chance that we get an HTTP 429 Too Many Requests error.

The output should fill up the terminal window and look something like this (truncated here, with placeholder values):
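    {"kind": "Listing", "data": {"children": [{"kind": "t3", "data": {"title": "An example post title", "url": "https://i.redd.it/example.jpg", "permalink": "/r/mildlyinteresting/comments/abc123/an_example_post_title/", ...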

There are lots of fields in the output data, but all we’re interested in are Title, Permalink, and URL. You can see an exhaustive list of types and their fields on Reddit’s API documentation page: https://github.com/reddit-archive/reddit/wiki/JSON

Extracting Data from the JSON Output

We want to extract Title, Permalink, and URL from the output data and save it to a tab-delimited file. We could use text processing tools like sed and grep, but we have another tool at our disposal that understands JSON data structures: jq. For our first attempt, let’s use it to pretty-print and color-code the output. We’ll use the same call as before, but this time, pipe the output through jq and instruct it to parse and print the JSON data:
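    curl -s -A "reddit scraper example" https://www.reddit.com/r/MildlyInteresting.json | jq .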

Note the period that follows the jq command. This expression simply parses the input and prints it back out as-is. The output looks nicely formatted and color-coded (abridged here, with placeholder values):
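    {
      "kind": "Listing",
      "data": {
        "children": [
          {
            "kind": "t3",
            "data": {
              "title": "An example post title",
              "url": "https://i.redd.it/example.jpg",
              "permalink": "/r/mildlyinteresting/comments/abc123/an_example_post_title/"
            }
          }
        ]
      }
    }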

Let’s examine the structure of the JSON data we get back from Reddit. The root result is an object that contains two properties: kind and data. The latter holds a property called children, which is an array of posts to this subreddit.

Each item in the array is an object that also contains two fields called kind and data. The properties we want to grab are in the data object. jq expects an expression that can be applied to the input data and produces the desired output. The expression must describe the contents in terms of their hierarchy and membership in an array, as well as how the data should be transformed. Let’s run the whole command again with the correct expression:
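    curl -s -A "reddit scraper example" https://www.reddit.com/r/MildlyInteresting.json | jq '.data.children | .[] | .data.title, .data.url, .data.permalink'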

The output shows Title, URL, and Permalink each on their own line (placeholder values shown):
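    "An example post title"
    "https://i.redd.it/example.jpg"
    "/r/mildlyinteresting/comments/abc123/an_example_post_title/"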

Let’s dive into the jq command we called:
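    jq '.data.children | .[] | .data.title, .data.url, .data.permalink'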

There are three expressions in this command, separated by two pipe symbols. The results of each expression are passed to the next for further evaluation. The first expression filters out everything except the array of Reddit listings. This array is piped into the second expression, which unpacks it so that each element can be evaluated separately. The third expression acts on each element and extracts the three properties we’re after. More information about jq and its expression syntax can be found in jq’s official manual.

Putting It All Together in a Script

Let’s put the API call and the JSON post-processing together in a script that will generate a file with the posts we want. We’ll add support for fetching posts from any subreddit, not just /r/MildlyInteresting.

Open your editor and copy the contents of this snippet into a file called scrape-reddit.sh.
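Here’s a version that matches the walkthrough below; the user agent string and the date format used in the filename are just example choices:

    #!/usr/bin/env bash

    # Exit with a usage message if no subreddit name was supplied
    if [ -z "$1" ]; then
        echo "Usage: $0 <subreddit>"
        exit 1
    fi

    # Store the subreddit name and build a date-stamped output filename
    SUBREDDIT=$1
    NOW=$(date +"%m_%d_%y-%H_%M")
    OUTPUT_FILE="${SUBREDDIT}_${NOW}.tsv"

    # Fetch the subreddit's JSON feed and reduce it to three fields per post
    curl -s -A "reddit scraper example" "https://www.reddit.com/r/${SUBREDDIT}.json" |
        jq '.data.children | .[] | .data.title, .data.url, .data.permalink' |
        while read -r TITLE; do
            read -r URL
            read -r PERMALINK
            # Join the fields with tabs, strip the double quotes, and append to the file
            echo -e "${TITLE}\t${URL}\t${PERMALINK}" | tr -d '"' >> "${OUTPUT_FILE}"
        done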

The script first checks whether the user has supplied a subreddit name. If not, it exits with an error message and a non-zero return code.

Next, it stores the first argument as the subreddit name and builds up a date-stamped filename where the output will be saved.

The action begins when curl is called with a custom user agent and the URL of the subreddit to scrape. The output is piped to jq, where it’s parsed and reduced to the three fields we want: Title, URL, and Permalink. These lines are read one at a time and saved into variables using the read command, inside a while loop that continues until there are no more lines to read. The last line of the inner while block echoes the three fields, delimited by a tab character, and pipes the result through the tr command to strip out the double quotes. The output is then appended to a file.

Before we can execute this script, we must ensure that it has been granted execute permissions. Use the chmod command to apply these permissions to the file:
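    chmod u+x scrape-reddit.sh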

And, lastly, execute the script with a subreddit name:
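    ./scrape-reddit.sh MildlyInteresting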

An output file is generated in the same directory, and its contents will look something like this (placeholder values shown):
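    An example post title	https://i.redd.it/example.jpg	/r/mildlyinteresting/comments/abc123/an_example_post_title/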

Each line contains the three fields we’re after, separated using a tab character.

Going Further

Reddit is a goldmine of interesting content and media, and it’s all easily accessed using its JSON API. Now that you have a way to access this data and process the results, you can do things like:

- Grab the latest headlines from /r/WorldNews and send them to your desktop using notify-send (sketched below)
- Integrate the best jokes from /r/DadJokes into your system’s Message-Of-The-Day
- Get today’s best picture from /r/aww and make it your desktop background
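As a taste of the first idea, here’s a minimal sketch: it reuses the jq expression from earlier (with the -r flag so jq prints raw strings instead of quoted JSON) and pushes the first five headlines to the desktop with notify-send:

    curl -s -A "reddit scraper example" https://www.reddit.com/r/WorldNews.json |
        jq -r '.data.children | .[] | .data.title' |
        head -n 5 |
        while read -r HEADLINE; do
            notify-send "r/WorldNews" "$HEADLINE"
        done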

All this is possible using the data provided and the tools you have on your system. Happy hacking!