Today I continued with the website data module, which is starting to make more sense.
I learned:
There are two main ways to look at data from websites in a more elegant manner: manually, or with ‘glom’. For instance, to get post titles from Reddit, one can use:
titles = []
for post in data['data']['children']:
    titles.append(post['data']['title'])
titles
Or, to do it with the library ‘glom’:
pip install glom
from glom import glom
glom(data, ('data.children', ['data.title']))
Which does the exact same thing! I first got an error when importing glom, but that was because I hadn’t run ‘pip install glom’ first.
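To double-check the manual version, here’s a self-contained sketch using made-up data shaped like Reddit’s listing response (the sample posts are invented for illustration, not real Reddit output):

```python
# Invented data mimicking the nesting of Reddit's listing JSON
data = {
    'data': {
        'children': [
            {'data': {'title': 'First post'}},
            {'data': {'title': 'Second post'}},
        ]
    }
}

# Manual extraction, same loop as above
titles = []
for post in data['data']['children']:
    titles.append(post['data']['title'])

print(titles)  # ['First post', 'Second post']
```

Running `glom(data, ('data.children', ['data.title']))` on the same dict should give the identical list.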
Reddit’s API returns its data as JSON (JSON is a data format, not the name of the API), and Tweepy is a Python library for working with Twitter’s API.
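Since JSON is just text, Python’s built-in json module can turn it into dicts and lists directly. A tiny sketch (the string here is an invented mini-example with the same nesting as Reddit’s responses):

```python
import json

# Invented JSON snippet, nested like a Reddit listing
raw = '{"data": {"children": [{"data": {"title": "hello"}}]}}'

parsed = json.loads(raw)  # JSON text -> Python dicts/lists
print(parsed['data']['children'][0]['data']['title'])  # hello
```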
Looking forward:
The module finished by going over how to import Twitter data, for which one has to get a developer account and set up a bunch of other code. I think I understand the concept, although that’s too much for me to go over right now. I’ve got class in like 10 minutes, hah. So this shall be an issue for future Isaac.