An HTTP client, taking inspiration from Ruby’s faraday and Python’s requests.

Package API:

  • HttpClient - Main interface for making HTTP requests. Synchronous requests only.
  • HttpResponse - HTTP response object, used for all responses across the different clients.
  • Paginator - Automatically paginate through requests - supports a subset of pagination scenarios, with more to be added.
  • Async - Asynchronous HTTP requests - a simple interface for many URLs - similar to HttpClient, with all URLs treated the same.
  • AsyncVaried - Asynchronous HTTP requests - accepts any number of HttpRequest objects - with a different interface than HttpClient/Async, since each request can have its own HTTP method, options, etc.
  • HttpRequest - HTTP request object, used for AsyncVaried
  • mock() - Turn on/off mocking, via webmockr
  • auth() - Simple authentication helper
  • proxy() - Proxy helper
  • upload() - File upload helper
  • Set curl options globally: set_auth(), set_headers(), set_opts(), set_proxy(), and crul_settings()
  • Writing to disk and streaming: available with both synchronous requests as well as async requests.
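As a sketch of the Paginator interface: it wraps an HttpClient and walks through pages for you. This assumes an API that accepts limit/offset-style query parameters (the Crossref API and its `rows`/`offset` parameters are used here purely as an illustration):

```r
library(crul)

# a client for the target API
cli <- HttpClient$new(url = "https://api.crossref.org")

# paginate via query parameters: 50 results total, 10 per request
cc <- Paginator$new(client = cli, by = "query_params",
  limit_param = "rows", offset_param = "offset",
  limit = 50, chunk = 10)

cc$get("works")     # performs the paged requests
cc$status_code()    # status code for each page
```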

Mocking:

crul now integrates with webmockr to mock HTTP requests. Check out the HTTP testing book.
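A minimal sketch of mocking with webmockr (the stubbed URL is illustrative; no real HTTP traffic occurs while mocking is on):

```r
library(crul)
library(webmockr)

crul::mock()  # turn mocking on

# stub a request so it is matched instead of hitting the network
stub_request("get", "https://httpbin.org/get")

x <- HttpClient$new(url = "https://httpbin.org")
x$get("get")  # matched against the stub, not the real server

crul::mock(FALSE)  # turn mocking back off
```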

Caching:

crul also integrates with vcr to cache HTTP requests/responses. Check out the HTTP testing book.
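A minimal caching sketch with vcr (cassette name and directory are illustrative): the first run records the real response to a cassette; later runs replay it from disk.

```r
library(crul)
library(vcr)

vcr_configure(dir = tempdir())  # where cassettes are stored

use_cassette("httpbin_get", {
  x <- HttpClient$new(url = "https://httpbin.org")
  res <- x$get("get")
})
# subsequent runs inside the same cassette replay the cached response
```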

Installation

CRAN version

install.packages("crul")

Dev version

devtools::install_github("ropensci/crul")
library("crul")

the client

HttpClient is where to start

It makes an R6 object that has all the bits and bobs you’d expect for doing HTTP requests. When it prints, it shows any defaults you’ve set; as you update the object you can see what’s been set.

You can also pass in curl options when you make HTTP requests, see below for examples.
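For example, a client with some default curl options and headers (the specific option and header values here are just illustrative):

```r
library(crul)

# defaults set here apply to every request made with this client
(x <- HttpClient$new(
  url = "https://httpbin.org",
  opts = list(timeout = 1, verbose = TRUE),
  headers = list(`User-Agent` = "my-crul-client")
))
```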

do some http

The client object created above has HTTP methods that you can call, passing paths as well as query parameters, body values, and any other curl options.

Here, we’ll do a GET request on the route /get on our base URL https://httpbin.org (the full URL is then https://httpbin.org/get)
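That request looks like this (the query parameter is illustrative and optional):

```r
library(crul)

x <- HttpClient$new(url = "https://httpbin.org")
# path is appended to the base URL; query parameters are passed as a list
res <- x$get("get", query = list(foo = "bar"))
```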

The response from an HTTP request is another R6 class, HttpResponse, which has slots for the outputs of the request and some functions for dealing with the response:

  • Status code
  • Status information
  • The content
  • HTTP method
  • Request headers
  • Response headers
  • All response headers - e.g., intermediate headers

And you can parse the content with parse()
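The items above map onto fields and methods of the HttpResponse object roughly as follows (assuming the GET request from earlier succeeded):

```r
library(crul)

x <- HttpClient$new(url = "https://httpbin.org")
res <- x$get("get")

res$status_code           # status code
res$status_http()         # status information
res$content               # the content, as raw bytes
res$method                # HTTP method used
res$request_headers       # request headers
res$response_headers      # response headers (final response)
res$response_headers_all  # all headers, including intermediate ones
res$parse("UTF-8")        # parse the raw content to a character string
```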

Asynchronous requests

The simpler interface, Async, allows many requests (many URLs), but they all get the same options/headers, etc., and you have to use the same HTTP method on all of them:
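For example (the URLs are illustrative; `$get()` returns one HttpResponse per URL):

```r
library(crul)

cc <- Async$new(urls = c(
  "https://httpbin.org/get",
  "https://httpbin.org/status/404"
))
res <- cc$get()  # a list of HttpResponse objects, one per URL

# e.g., pull the status code out of each response
vapply(res, function(z) z$status_code, numeric(1))
```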

The AsyncVaried interface accepts any number of HttpRequest objects, which can define any type of HTTP request of any HTTP method:

Execute the requests

Then functions get applied to all responses:
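Putting those three steps together (build HttpRequest objects, execute, then apply functions across all responses; the URLs and body are illustrative):

```r
library(crul)

# each HttpRequest can use a different method, path, body, etc.
req1 <- HttpRequest$new(url = "https://httpbin.org")$get("get")
req2 <- HttpRequest$new(url = "https://httpbin.org")$post("post",
  body = list(a = 5))

out <- AsyncVaried$new(req1, req2)

out$request()      # execute all requests
out$status_code()  # status codes across all responses
out$content()      # raw content across all responses
out$parse()        # parsed content across all responses
```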

TO DO

Meta
