Current UT students, staff, and faculty can access licensed data and datasets through UT Libraries. Current users can also text and data mine licensed content from selected content providers. Freely available data sources and APIs can also be found on the web. Please explore these sources to see what fits your needs, and contact a librarian for additional help!
Users of screen readers and keyboard navigation may require assistance with this resource. Please contact eproblems@utk.edu for help.
You will need to register for a Guest User account with LDC. Make sure to select our organization: University of Tennessee, Knoxville. Library administration will approve the account; you can then access the data directly through LDC.
If Roper Center content is inaccessible to you, please contact eproblems@utk.edu to request an accessible alternative format.
Users of keyboard navigation or of swipe/gesture navigation on Android/iOS may require assistance with this resource. Please contact eproblems@utk.edu.
If the content of a SAGE Research Methods article is inaccessible to you, please contact eproblems@utk.edu to request an accessible alternative format.
Users of swipe/gesture navigation should avoid using an iPad/iOS device on this website.
The official source for U.S. export and import statistics.
2011 data also available as zip files.
Screen reader users or users of keyboard navigation may require assistance and should avoid using Firefox on this website. Please contact eproblems@utk.edu for assistance.
The resources listed on this page may be text and data mined for academic scholarship or educational purposes. The list is organized by vendor/platform based on our UT license agreements with the vendor or publisher.
If you do not see a resource listed here, please contact us and we can investigate further. We will need time to review the license agreement and terms of use, so please plan accordingly. Carrying out automated text and data mining on a database in violation of its terms of use violates the University Libraries Electronic Resources Use Policy.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Adam Matthew. Adam Matthew requires a permission form be filled out and submitted before mining begins.
Contact: info@amdigital.co.uk
Permission provided for non-commercial educational and scholarly TDM from Cambridge University Press's Terms of Use.
Contact: directcs@cambridge.org
Web of Science's production team can create a custom data set based on set variables for a fee (contact librarian for additional help). Within Web of Science, you can use the Analyze tool to analyze a subset of data within the interface.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Elsevier and Elsevier's TDM Policy. Elsevier has a Developers Portal where you register for an API key. After registration, you must request elevated privileges to receive full access to their data. There is a rate limit of 20,000 records per week.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Emerald Publishing. Emerald asks that you notify them before conducting any TDM activities on www.emerald.com/insight so that they can manage server capacity, letting you complete your activity without technical obstacles while maintaining access for all Emerald users.
Contact: permissions@emeraldinsight.com
Permission provided for non-commercial educational and scholarly TDM from our license agreement with Gale.
TDM of JSTOR content is permissible according to our License Agreement with some restrictions. Data for Research (https://www.jstor.org/dfr/) is JSTOR's TDM service. Datasets must be requested through JSTOR and are processed by JSTOR. Datasets are free and may include data for up to 25,000 documents. See their site for more info on creating datasets, specifications, requests, and sample datasets.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Oxford University Press.
Contact: Data.Mining@oup.com
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Project MUSE. Project MUSE requests that you contact them before beginning mining.
Text and data mining of our subscribed ProQuest content is available through ProQuest TDM Studio. Simply create an account and begin building datasets.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with SAGE. See TDM Info for request limits and other API information.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Springer Nature. However, there are restrictions on the storage of data; Springer Nature has a TDM Attachment to their License Agreement. Contact your librarian for more information.
Experience with Taylor & Francis shows that they are willing to permit TDM of their products when informed of the research involved and its time frame. Contact your librarian for help.
Permission provided for non-commercial educational and scholarly TDM from University of Chicago Press's Terms and Conditions, which specifically request that users contact them for approval.
Permission provided for non-commercial educational and scholarly TDM from our License Agreement with Wiley.
Contact: TDM@wiley.com
TDM of EBSCO content is not permissible at this time.
TDM of NewsBank content is strictly prohibited under our License Agreement. Mining NewsBank content would carry an additional cost and may require additional licensing. Please contact your librarian for more information.
The main idea of the API is that you construct HTTP requests using the parameters described in the search endpoint documentation and get your results back in JSON format. Some people may choose to put together their own scripts using the appropriate tools for their language, such as Python's requests library, passing the response text to the standard library's JSON tools. Another option is to look to the community for more specialized tools, such as the twitteR package for R. You can also review the data dictionaries for tweets, users, and entities (contextual information such as mentions and hashtags).
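The request-then-parse flow above can be sketched with only Python's standard library. The recent-search URL, the query parameters, and the bearer-token header below mirror common patterns from the search endpoint documentation, but the token, the example query, and the sample response body are all placeholders, so check the current docs before relying on the details.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder token -- obtain real credentials from the developer portal.
BEARER_TOKEN = "YOUR-BEARER-TOKEN"

# Construct (but do not send) a request against a recent-search endpoint,
# using parameters in the style of the search endpoint documentation.
params = {"query": "from:SomeAccount", "max_results": 10}
url = "https://api.twitter.com/2/tweets/search/recent?" + urlencode(params)
req = Request(url, headers={"Authorization": f"Bearer {BEARER_TOKEN}"})

# The response body comes back as JSON text; the standard library can load it.
# A trimmed, made-up sample of what the "data" array looks like:
sample_body = '{"data": [{"id": "1", "text": "Hello from the library!"}]}'
tweets = json.loads(sample_body)["data"]
for tweet in tweets:
    print(tweet["id"], tweet["text"])
```

The same pattern (build a parameterized URL, attach an auth header, parse the JSON response) carries over to the requests library or to wrapper packages in other languages.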