
Posts

Showing posts from January, 2018

Flattening An Array

How do you flatten an array?  That is, how do you take something like this: [1, [2, [3, [4]]]] and change it into something like this: [1, 2, 3, 4]?  The first thing that occurs to me is that you would go through each element of the given array and check whether it is itself an array.  So we go through: 1 (NO), [2, [3, [4]]] (YES).  We're done with the non-arrays -- we can keep them somewhere (another array) -- but the arrays require further processing.  We process them in exactly the way we processed the first array.  The trick is figuring out how to do that organically: we need the computer to keep processing the members of the array, and the members of those members, recursively (!), all the way down to the individual elements. What I think you would do is create a general function that processes an array into elements, then call that function not only on the array itself but also on the members of that array.  How would that look? Well, if we have a plain element, we don't call the function on it. ...
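Here's a minimal sketch of that idea in Python (the logic carries over directly to JavaScript): the function keeps plain elements and calls itself on any member that is itself an array.

def flatten(arr):
    result = []                   # collect non-array elements here
    for item in arr:
        if isinstance(item, list):
            # The member is itself an array: process it the same way
            # we processed the outer array, and keep whatever comes out.
            result.extend(flatten(item))
        else:
            # A plain element needs no further processing.
            result.append(item)
    return result

print(flatten([1, [2, [3, [4]]]]))    # [1, 2, 3, 4]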

How Does A Calculator Work?

A calculator has 11 numeric entry buttons (0-9 and the decimal), 4 operation buttons (addition, multiplication, division, subtraction), two clear buttons (clear entry, clear everything), and a result button (equals). At the start of operations, the entry reads "0". The user then presses one of the buttons.  If the user presses "0", the entry continues to read "0".  If the user presses any other digit, the entry displays that digit.  The user can then compound that digit into a numeral by pressing additional digits. The user might choose, however, to press a non-numeral button.  Let's think about what happens if the user presses one of the operation buttons.  To make things more concrete, let's list some actions the user might take in a sequence.  (For the sake of brevity, I will write entire numerals as the end product of a sequence of digit entries rather than listing each individual keypress.) So the...
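A minimal Python sketch of just the digit-entry behavior described above -- the function name and structure are mine, and it ignores the decimal and operation buttons:

def press_digit(entry, digit):
    # Pressing "0" while the entry reads "0" leaves it unchanged;
    # any other digit replaces the leading zero.
    if entry == '0':
        return entry if digit == '0' else digit
    # Otherwise the new digit is appended, compounding the numeral.
    return entry + digit

entry = '0'
for d in '0305':
    entry = press_digit(entry, d)
print(entry)    # 305 -- the initial press of "0" had no effect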

What Is Success, For Me?

The secret to career success is self-branding: deciding what you represent, realizing those values, and communicating your achievement to colleagues and potential employers. Deciding what you represent can be the most difficult part of the entire process, but it is also the most important.  Success has many dimensions -- financial, social, philosophical (for lack of a better word), even moral; carefully determining what you value will help you to synthesize all these dimensions so that you can present a united front to people in your network -- and so that in the end you can achieve something you will feel proud of.  Your values help to tell your story: accreditations, experience, accomplishments, and transitions are not so many bullet points to put on a resume; they represent something for those who know what they're about.  For those who don't know what they're about, on the other hand, they become mere lists and amalgamations; such people follow the market, get certificates, get an MBA, not because they feel this is the...

JavaScript Function For Symmetric Difference

EDIT: Note that the code below works (at least as far as I know) so long as you assume each array is formatted like a set.  If either of the arrays has repeating entries -- i.e. behaves the way arrays are allowed to behave -- the algorithm won't work and is seriously in error.  Oh well -- back to the drawing board. The symmetric difference of two sets is the difference between their union and their intersection. Consider the following sets: A = {1, 2, 3, 4, 5} B = {1, 3, 5, 7, 9}  The union of A and B is {1, 2, 3, 4, 5, 7, 9}. The intersection of A and B is {1, 3, 5}. So the symmetric difference of A and B is {1, 2, 3, 4, 5, 7, 9} - {1, 3, 5} = {2, 4, 7, 9}. My job was to construct a two-argument JavaScript function that would correctly return this result.  I decided to proceed as follows: 1. Concatenate the two arrays to produce their "union" (with duplicate values). 2. Remove any value that has a duplicate. Here is my code: function diffArray(...
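Sketching the same two steps in Python (the post's actual code is JavaScript), with the same caveat from the EDIT above -- this only works if each input array is set-like, with no repeated entries:

def symmetric_difference(a, b):
    # Step 1: concatenate the two arrays -- a "union" with duplicate values.
    combined = a + b
    # Step 2: keep only values that appear exactly once; anything duplicated
    # must have come from both arrays, i.e. the intersection.
    return [x for x in combined if combined.count(x) == 1]

print(symmetric_difference([1, 2, 3, 4, 5], [1, 3, 5, 7, 9]))    # [2, 4, 7, 9]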

Databased

I decided to add code to store the results from my scraper into a database instead of just printing them on the screen.  I also added a bit to capture the full review text as well as the title.  In the process, I learned a few things about SQL calls: you can't use WHERE with INSERT INTO, and if you want to add something to an existing row, you have to use UPDATE ... WHERE. I spent a lot of time trying to insert the review into the same row as its associated title.  At first, I was creating a new row for each review.  Then I tried to link the review with my i counter and insert it into the i-th row.  But that went nowhere because (A) the database wasn't associating any numbers with the rows and (B) i counts review pages.  I fumbled around a lot, got my count INTEGER NOT NULL PRIMARY KEY (probably overkill -- I wanted to AUTOINCREMENT too, but I guess I was putting in the wrong command), and then after watching dubiously as the databas...
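The INSERT-versus-UPDATE distinction, in sqlite3 terms -- a sketch with made-up table and column names, not the scraper's actual schema:

import sqlite3

conn = sqlite3.connect('reviews.sqlite')    # hypothetical database
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS Reviews (title TEXT, review TEXT)')

# INSERT INTO always creates a brand-new row; it takes no WHERE clause.
cur.execute('INSERT INTO Reviews (title) VALUES (?)', ('Camp X',))

# To add data to a row that already exists, UPDATE ... WHERE picks out
# the row to modify -- here, the row whose title we already stored.
cur.execute('UPDATE Reviews SET review = ? WHERE title = ?',
            ('Great instructors.', 'Camp X'))

conn.commit()
conn.close()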

Rudimentary Scraper

I'm toying with the idea of attending a coding camp, so I was looking at a review website.  I noticed I couldn't really read all the reviews I was looking at on one page or (as far as I could tell) sort them by ranking.  I thought how useful it would be if I could just figure out a way to download all the review data and put it into a database. I scanned the HTML for a bit, but didn't make any progress figuring out where the data I wanted was.  I did a few searches that turned up some complicated-looking stuff (a tutorial from a company that devotes itself exclusively to HTML scraping), gave up, came back, and finally found this.  Unfortunately, the tutorial was written for use with Python 2, so I had to figure out how to modify the libraries -- now at least I understand a bit more about why Chuck's class imports urlopen.  After that, it was as easy as mousing over the element I wanted on the review page and requesting the text. Needs further development in a...
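In the same spirit (I'm not reproducing the tutorial's code, so the URL and tag selection below are placeholders), a minimal Python 3 scraper using urlopen plus BeautifulSoup looks something like this:

import urllib.request                    # Python 3 home of urlopen
from bs4 import BeautifulSoup            # pip install beautifulsoup4

url = 'https://example.com/reviews'      # placeholder, not the real review site
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, 'html.parser')

# Print the text of every element with a (hypothetical) review class;
# the real class name is what you find by mousing over the element.
for div in soup.find_all('div', class_='review'):
    print(div.get_text(strip=True))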

Getting Geodata From Google's API

The apps I'm going to be analyzing are part of Dr. Charles Severance's MOOC on Python and Databases and work together according to the following structure (which applies both in this specific case and more generally to any application that creates and interprets a database using online data). The data source, in this case, is the Google Maps Geocoding API.  The "package" has two components: geoload.py and geodump.py.  geoload.py reads a list of locations from a file -- addresses for which we would like geographical information -- requests information about them from Google, and stores the information in a database (geodata.db).  geodump.py reads the stored data from the database, parses the JSON, and loads the result into a JavaScript file.  The JavaScript is then used to create a web page on which the data is visualized as a series of points on the world map.  Dr. Severance's course focuses on Python, so I'm only going to work my way through ...
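The heart of the geoload.py step -- asking Google about one address -- reduces to a request like the following sketch (simplified; the course's real code adds an API key parameter, error handling, and the database writes):

import urllib.request, urllib.parse, json

address = '77 Massachusetts Ave, Cambridge, MA'    # one line from the input file
params = urllib.parse.urlencode({'address': address})    # plus key=... in practice
url = 'https://maps.googleapis.com/maps/api/geocode/json?' + params

data = urllib.request.urlopen(url).read().decode()
js = json.loads(data)                    # parse the JSON reply
if js.get('status') == 'OK':
    loc = js['results'][0]['geometry']['location']
    print(loc['lat'], loc['lng'])        # the coordinates geodump.py later maps
else:
    print('Retrieval failed:', js.get('status'))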

I Don't Have To Reinvent The Wheel

As a final project for Week 5 of "Python For Everybody: 4," we had to work with an application that retrieves geographical data from a Google API for a list of locations, stores the data in a database, produces representations of the data in JSON, and then displays them visually in HTML -- a kind of home-made Google Maps. I didn't have to write any of this code.  All I had to do, really, was get an API key from Google and paste it into the code/HTML (because Google now requires API keys for JavaScript as well). That was the hard way -- I could have kept the key variable False and just used a local version of the data.  But as it is, I now have a Google API project called "Geodata-xxx" and two keys.  Well, it isn't exactly Kingdom Hearts. What I would like to do in the next few posts is make sure I understand the code I used.  This is a point I've been thinking about since yesterday: the scariest thing whenever you have to write anything, code bei...

It's a Date

I guess I should really be putting these things up on GitHub.  The way I see it, the coding journal is just a place to share the code I write or study along with any notes I have about it.  It's sort of a documentation LiveJournal, if you will. Anyway, this is a "study" for my project idea: create an app that will prompt the user for two dates, then calculate the difference between them. The burden of this study is twofold: (1) convert dates in standard American form (e.g. December 15, 1993) into dates in standard American numeric form (e.g. 12/15/1993); (2) create a numerical representation of the date. To process the date, I started with a list of the months.  I then used a loop to create a dictionary that would attach a value to each month. Next I had to parse the user entry (I haven't added any handling for incorrect entries yet). I did so by splitting the entry into "raw" data.  I used my dictionary to process the month name into a number, str...
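The month dictionary and the parsing step might look roughly like this -- my reconstruction of the study, not the original code:

months = ['January', 'February', 'March', 'April', 'May', 'June', 'July',
          'August', 'September', 'October', 'November', 'December']

# Loop over the list to attach a number to each month name.
month_num = {}
for i in range(len(months)):
    month_num[months[i]] = i + 1

# Split an entry like 'December 15, 1993' into its raw pieces, then use
# the dictionary to reassemble it in numeric form: 12/15/1993.
entry = 'December 15, 1993'
month, day, year = entry.replace(',', '').split()
print('{}/{}/{}'.format(month_num[month], day, year))    # 12/15/1993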

Commentary On Multiple Database Twitter Crawler

#import libraries to open URLs, interact with the Twitter API, use JSON,
#use SQL, and ignore security certificates.
import urllib.request, urllib.parse, urllib.error
import twurl
import json
import sqlite3
import ssl

#relevant Twitter API endpoint.
TWITTER_URL = 'https://api.twitter.com/1.1/friends/list.json'

#create the SQL database and cursor.
conn = sqlite3.connect('friends.sqlite')
cur = conn.cursor()

#Create the People and Follows tables.  People will list all the users
#we've encountered and let us know whose friend lists we've retrieved; it
#also assigns a unique number to each one.  (Should be 1-1 between ids and
#names.)
cur.execute('''CREATE TABLE IF NOT EXISTS People
    (id INTEGER PRIMARY KEY, name TEXT UNIQUE, retrieved INTEGER)''')
cur.execute('''CREATE TABLE IF NOT EXISTS Follows
    (from_id INTEGER, to_id INTEGER, UNIQUE(from_id, to_id))''')
#creat...

Damn You, API Limits!

Today I've been playing around (or rather copying from a book that's playing around) with the Twitter API.  Below is a program that creates a SQL database for a given Twitter user listing that user's friends, whether each friend has been searched, and the number of times that friend has appeared in our searches.  I can't say I 100% understand how the machine works yet, but I have a running commentary that sheds a bit of light on something I certainly did not understand the first couple of times I looked at it.  What the app is supposed to do is crawl through the friends of an account we select, then the friends of those friends, and so on.  The end result would be a Twitter network that tells us the number of people from the network who have "friended" each of its members.  It would also probably take years to create, since it can only be run 14 times a day...

#Import all the necessary libraries to open internet connections (urllib),
#access the Twitter...
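The listing is cut off here, so here is a condensed Python paraphrase of the loop the program runs, as I understand it -- my own names and simplifications, not the book's exact code.  It assumes the course's twurl helper (and the credentials it reads) is available:

import json
import sqlite3
import urllib.request
import twurl                      # the course's helper that signs request URLs

conn = sqlite3.connect('spider.sqlite')        # hypothetical database name
cur = conn.cursor()
cur.execute('''CREATE TABLE IF NOT EXISTS Twitter
    (name TEXT, retrieved INTEGER, friends INTEGER)''')

acct = 'drchuck'                  # example starting account
url = twurl.augment('https://api.twitter.com/1.1/friends/list.json',
                    {'screen_name': acct, 'count': 5})
js = json.loads(urllib.request.urlopen(url).read().decode())

# Mark this account as retrieved, then record each of its friends.
cur.execute('UPDATE Twitter SET retrieved=1 WHERE name=?', (acct,))
for u in js['users']:
    friend = u['screen_name']
    cur.execute('SELECT friends FROM Twitter WHERE name=? LIMIT 1', (friend,))
    row = cur.fetchone()
    if row is None:
        # First sighting: add the friend, not yet retrieved.
        cur.execute('INSERT INTO Twitter (name, retrieved, friends) VALUES (?, 0, 1)',
                    (friend,))
    else:
        # Seen before: bump the count of appearances.
        cur.execute('UPDATE Twitter SET friends=? WHERE name=?', (row[0] + 1, friend))
conn.commit()

To crawl outward, you would wrap this in a loop that each time picks an unretrieved name (SELECT name FROM Twitter WHERE retrieved=0) and uses it as the next acct -- until the rate limit cuts you off.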