Monday, November 4, 2013

Hurricane Sandy, One Year Later

It's been a year since Hurricane Sandy hit the east coast, and although it's not talked about as much in the national media, things are still not back to normal.  Being born and raised in New Jersey and now living half a country away in Texas, I wanted to look into how bad things still are.  I visited the beaches of southern New Jersey last summer and everything looked pretty much the way I remembered it, but I know that other areas were much harder hit.

When I was home I was bombarded with these "stronger than the storm" commercials, and I thought it was strange that they seemed to only focus on the Jersey shore's recovery.  I got the impression from people I talked to that the majority of the recovery effort has been focused on New Jersey's moneymaker: its beaches.  While this at first made sense to me, the longer I thought about it the more unfair it seemed.  Many of the shore homes being rebuilt are people's second homes--a luxury item.  Sure it sucks to lose that shore house, but it sucks a lot more to lose your only house.  Anyone in that situation would feel pretty left out of the recovery, especially given how much attention is focused on rebuilding the shore.

I set out to visualize the damage from Sandy as well as the recovery.  I wanted to know if there was an income gap in the recovery, and whether there was a bias towards rebuilding the shore at the expense of inland areas.  It turns out, this was harder than I thought.

I started with housing assistance data from FEMA.  Here they break the total housing damage down by zip code and also list the amount of FEMA aid given.  Datasets are split between homeowners and renters.  Since the total damage is not listed for renters, I decided to base my analysis on the homeowners data.  This will skew my results towards higher-income families, though, since most low-income families rent.

I wrote a Python script to parse this file and extract some values for analysis.  I initially set out to make maps by writing directly to a blank image of the state with its zip codes identified.  To do this, I used the Beautiful Soup package to parse the XML in a similar way to what I did in my first post.  However, I soon found that this was ugly and uninformative.  I needed to overplot my results onto a real map with cities listed so that people could identify where the damage is.  I looked all over for a Pythonic way to do this, but I never found anything as good as ggmap in R.  The only problem is that I don't know R very well.  My solution: do everything in Python and only write out what needs to be plotted in R!  I did the analysis, set the scale for the fill color, and created a mapping from zip code to fill color all in Python, then just dumped this out to be read by my R script.
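To illustrate the Python-to-R handoff, here's a minimal sketch of the Python side.  The zip codes, damage totals, and simple red color scale below are made up for illustration; they stand in for the real FEMA numbers and the scale I actually used.

```python
import csv

# Toy damage totals by zip code -- these zips and dollar amounts are
# made up for illustration, not taken from the FEMA data
damage = {"08401": 9_200_000, "08260": 4_100_000, "08008": 12_500_000}

# Scale each zip's damage against the worst-hit zip and turn it into a
# red fill color that the R/ggmap script can read straight off disk
worst = max(damage.values())
with open("zip_colors.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["zip", "fill"])
    for z, d in sorted(damage.items()):
        shade = int(255 * d / worst)                    # 0..255, worst = 255
        writer.writerow([z, "#ff{0:02x}{0:02x}".format(255 - shade)])
```

The R script then only has to join this table against the map polygons by zip code and use the fill column directly, with no analysis logic on the R side.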

Here's the first map, a map of total homeowner damage in the state.

source: Python, R

As expected, the worst of the damage is near the coastal regions.  Also, the damage at the beaches north of Atlantic City was much worse than at the southern beaches I visited this summer.  That part makes sense too.

Now here's a plot of total damage minus the amount of FEMA housing aid.

source: Python, R

That plot doesn't look too much different from the first.  The color overlay is lighter in South Jersey, indicating that much of the area has at least received the money to cover rebuilding costs.  Again, when I visited my parents near Philadelphia, that was pretty much the picture that I saw.

I also found data from the New Jersey Office of the State Comptroller, which compiles all the Sandy-related contracts awarded for recovery.  I can sum these up and subtract them from the total damage to get another estimate of how the recovery is going.

source: Python, R

Again, it's the coast that still has the largest deficit between recovery assistance authorized and damage done.  Clearly these data are telling me that my initial understanding of the situation was wrong.  The state is right to spend more money on rebuilding the shore because that's where most of the damage was.

However, there's a flaw in this analysis.  I'm still only analyzing the housing damage reported by homeowners.  I'm missing a large group of the state's population, and probably those with the lowest incomes who were hit the hardest financially.  I tried to account for this by scaling the damage by the mean income for each zip code in the FEMA claims, but this number is self-reported and not properly normalized by household.  The resulting maps didn't look like much of anything, so I dropped them.  In the end, I guess I learned that coastal storms do more damage to coastal populations.  Wow.  What a breakthrough.

Wednesday, September 25, 2013

(More) Fun with Congressional District Maps

My last post looked at the boundaries of Congressional Districts in the US and tried to draw some conclusions about the political motivations behind the drawing of their boundaries.  Specifically, I calculated the ratio of a district's perimeter divided by its area to find geometric oddballs--districts with funny shapes that I interpreted as evidence for gerrymandering.

It turns out I was on the right track, but didn't have things quite right.  A friend pointed out a link to a study that a professional geographer did on the same topic.  His analysis was pretty close to mine, except he used the ratio of perimeter-squared over area.  This keeps the ratio dimensionless and is the proper way to do it.  My ratio was actually biasing my result towards finding smaller districts (which tend to be in urban areas, which tend to be Democratic, yadda yadda yadda).
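A quick sanity check of the dimensional argument, using nothing but a square at two different scales (the numbers are just illustrative geometry, not district data):

```python
# The same square at two scales: perimeter/area shrinks as the shape
# grows, while perimeter**2/area is dimensionless and stays put -- which
# is why the first ratio biases you towards flagging small districts.
for side in (1.0, 10.0):
    perimeter, area = 4 * side, side ** 2
    print(side, perimeter / area, perimeter ** 2 / area)
# 1.0 4.0 16.0
# 10.0 0.4 16.0
```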

So I re-did my calculation, and here is the revised histogram using the correct ratio:



You can see one or two Republican districts in the tail towards the right, but it's still a lot more blue than red.  A good sign that this is working is that I now have North Carolina's 12th, one of the most obviously gerrymandered cases, as the district with the highest ratio.

Piggybacking off another comment on my last post, I also decided to add a few more dimensions to my data set.  The commenter suggested that the perimeter-to-area alone might be indicative of natural boundaries that occur in urban areas, and not any malicious political intent.  So I decided to add some extra information and see if it makes gerrymandered districts pop out more obviously.

I added two more variables for each district: the margin of victory in the 2012 election, and the fraction of neighboring districts with the same party affiliation.  Remember that "packing" is the strategy of putting all of your opponent's supporters in a single district so that you can win the surrounding districts and come away with a net gain in Congressional seats.  As a result, districts that are examples of packing should have a comfortable margin of victory and should be surrounded by districts under the control of the opposite party.

This seemed like it should be easy, but finding the data and getting it into a neat form for analysis proved more difficult than I expected.  I took the results of the 2012 Congressional elections from the Federal Election Commission.  Unfortunately, their monstrosity of a spreadsheet tallied all the races (primary and general) for every candidate in every wacko third party for every seat.  This took some work to clean up.  After I got it in order, I tallied up the margins of victory for each district on a scale ranging from 0 (a dead heat) to 1 (a blowout with one candidate receiving 0% of the vote).
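For the record, here's roughly what that margin calculation looks like, with made-up candidate names and vote totals.  It assumes the primary races and minor candidates have already been filtered out, leaving general-election votes only.

```python
def margin_of_victory(votes):
    """Margin between the top two finishers, scaled 0 (dead heat) to
    1 (a blowout where the runner-up got zero votes).

    `votes` maps candidate name to general-election vote count and is
    assumed to contain at least two candidates.
    """
    first, second = sorted(votes.values(), reverse=True)[:2]
    return (first - second) / (first + second)

# Hypothetical races, not real 2012 results
print(margin_of_victory({"Smith": 150_000, "Jones": 150_000}))  # 0.0
print(margin_of_victory({"Smith": 150_000, "Jones": 0}))        # 1.0
```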

The next task was to go through each district and calculate the fraction of neighboring districts under the same party's control.  The first step involved finding the neighboring districts.  This took a lot of thinking.  I was going to do it by going through each district and calculating the distance to all the other districts' boundaries, but that would have been horrible.  The boundaries are specified at 100-meter intervals, which means the arrays that hold them have hundreds of thousands of points.  If I'm calculating distances, the number of calculations I'd have to do is astronomical: for P points and D districts it would be $O(P^2 D^2)$.  With P around 100,000 and D=435, that number is huge.

Instead of doing things the brute force way, I got creative and realized that neighboring districts would share some of the same points in their coordinates.  If I could convert these coordinate arrays for each district into sets, I could take advantage of Python's blazingly fast set operations.  With a quick one-liner I can find if any two districts share common points (intersect) and the average-case time it takes for this operation is $O(P)$, a big improvement over $O(P^2)$.
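Here's a toy version of that idea, with tiny made-up boundary arrays standing in for the real coordinate data (the real work lives in district_findneighbors.py):

```python
def find_neighbors(boundaries):
    """boundaries maps district id -> list of (x, y) boundary points.

    Two districts are called neighbors if they share any boundary point.
    `isdisjoint` short-circuits on the first common element, so each
    pairwise check is roughly O(P) on average instead of the O(P^2)
    cost of comparing distances between every pair of points.
    """
    point_sets = {d: set(pts) for d, pts in boundaries.items()}
    ids = sorted(point_sets)
    neighbors = {d: set() for d in ids}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if not point_sets[a].isdisjoint(point_sets[b]):
                neighbors[a].add(b)
                neighbors[b].add(a)
    return neighbors

# Three fake "districts": A and B share the point (1, 0); C is isolated
districts = {"A": [(0, 0), (1, 0)], "B": [(1, 0), (2, 0)], "C": [(3, 0), (4, 0)]}
print(find_neighbors(districts))  # {'A': {'B'}, 'B': {'A'}, 'C': set()}
```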

Ok, so now that I've calculated everything I need, what do the data look like?  Here is a 3D plot of the perimeter-to-area ratio, margin of victory, and the fraction of neighboring districts under the same party control.  Dots are color coded according to the current controlling party.
3D scatterplot of all 435 Congressional Districts, broken down by perimeter-to-area ratio, margin of victory in the 2012 elections, and the fraction of neighboring districts under the same party's control.  Blue and red map to the current controlling parties in the obvious way.  Source code: district_findneighbors.py, explore_district.py

The most interesting cases are those with a large ratio (so they have a funny shape), high margin of victory (so they're primarily a single-party district) and a low fraction of neighboring districts under their party's control (so they're an isolated island).  These guys are prime examples of the "packing" strategy in action!



Gerrymandered districts show up in the lower right in the first view and the lower left in the second.
We can set up a rule for whether or not we label a district as gerrymandered.  For anything that lies in the area I've circled, it's a pretty safe bet that it's been gerrymandered.  Once again, the majority of these points are Democratically-controlled districts.  In my opinion, this shows that gerrymandering is indeed being done, and that it's the Republicans doing it now.  That's not to say that Democrats haven't done it in the past (they have).

As a final point, let's beat the dead horse that is North Carolina's 12th District.  This Democratically-controlled district shows up with the highest ratio, one of the largest margins of victory, and exactly ZERO neighboring Democratically-controlled districts.  If I've done nothing else, I've at least shown that politics in North Carolina are messed up.

Here it is, one more time.  The district that's been referred to as "political pornography".



Friday, September 13, 2013

Fun with Congressional District Maps

With Congress's approval rating hovering below 20%, it's safe to say that all sides of the political spectrum are unhappy.  When you consider that the 113th Congress is on pace to pass only 72 bills in its two-year session (compare that to the mere 900 bills passed by the 80th Congress that got it the nickname "the Do Nothing Congress") it's easy to see why.  Add to that the sharp uptick in partisan vitriol, and I think both Democrats and Republicans would agree that our lawmakers are more polarized now than ever.

So if you were to ask a political analyst why things are this way, I'm betting that many of them would point, at least in the House of Representatives, to the gerrymandering of congressional districts.  Gerrymandering refers to the strategy of manipulating the boundaries of voting districts so as to gain an advantage over your opponent.  There are two strategies employed, known as "packing" and "cracking".  The goal of packing is to isolate as many of your opponent's supporters as possible in a single district, which you concede, so that you can win a larger number of the surrounding districts.  Cracking does the opposite: it aims to dilute the strength of a district that is controlled by your opponent by mixing in your own supporters.  These two strategies are usually used in concert for the greatest effect.

Packing, in particular, tends to produce districts that elect Representatives with extreme political views.  Since the district is essentially a single-party district there's no challenge in the general election, and hence no incentive to run a moderate candidate.  This, I believe, is one of the more serious problems with our current Congress, and it got me wondering if it can be traced directly to the maps of Congressional districts.

What I needed was a diagnostic of a gerrymandered district.  They're easy to spot by eye (they look kind of like salamanders, which is how the term got its name in the first place) but I had no intention of looking through all 435 Congressional districts for oddities.  I needed an automatic classifier that would be able to separate obviously-gerrymandered districts like North Carolina's 12th:


from districts with relatively normal boundaries like Georgia's 2nd:


I decided to use the ratio of a district's perimeter to its area as a proxy for gerrymandering.  For districts like Georgia's 2nd that are close to rectangular, this number is relatively small.  However, districts like North Carolina's 12th, with all its twists and turns, should have a much larger ratio relative to normal.
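As a sketch of how that ratio behaves, here's a small Python function that computes it from a list of boundary points, using the shoelace formula for the area.  The coordinates below are made-up toy shapes, not real district boundaries: a compact square versus a long, thin sliver with the same area.

```python
import math

def perimeter_area_ratio(coords):
    """Perimeter over area for a closed polygon of (x, y) boundary points."""
    n = len(coords)
    perimeter = sum(math.dist(coords[i], coords[(i + 1) % n]) for i in range(n))
    # Shoelace formula for the polygon's area
    area = abs(sum(coords[i][0] * coords[(i + 1) % n][1]
                   - coords[(i + 1) % n][0] * coords[i][1]
                   for i in range(n))) / 2
    return perimeter / area

square = [(0, 0), (4, 0), (4, 4), (0, 4)]     # compact: area 16, perimeter 16
sliver = [(0, 0), (16, 0), (16, 1), (0, 1)]   # stretched: area 16, perimeter 34
print(perimeter_area_ratio(square))  # 1.0
print(perimeter_area_ratio(sliver))  # 2.125
```

The stretched shape scores more than twice as high despite enclosing the same area, which is exactly the behavior the classifier relies on.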

So I wrote a script to compute this ratio for every district and make a map of the districts with the largest ratio.  Ok, so how did it do?  The answer is it performed surprisingly well given the very limited input information.

Here are some of the maps I made of districts with the highest ratio of perimeter to area:


Source Code
and here are some districts with a low ratio:
Notice how the obviously gerrymandered districts all occur around urban areas?  That's not by accident, it's a great example of "packing" in action.  After a quick inspection, I'm feeling pretty confident in my perimeter-to-area classifier's ability to pick out gerrymandered districts.  So let's see what else we can learn from this.

Here's a histogram of all the districts' ratios broken down by which party currently has control of the district.  
Every district with a ratio greater than about 0.0005 is Democratically-controlled.  If my classifier is working, that means that every one of the most gerrymandered districts is a Democratic district.  Doesn't that sound like shenanigans?  Source Code
I found it remarkable that all of the districts in the high-ratio tail of the distribution are Democratically-controlled.  This suggests that Republicans are winning the gerrymander battle and successfully concentrating Democrats into "packed" districts.  Maybe that's how, despite winning more votes than Republicans, Democrats are stuck in a 195-234 minority in the House.  Representative democracy in action!

Tuesday, June 18, 2013

Understanding Aircraft Bird Strikes: (Messing Around with Tableau)

Ok, I'll admit it, I'm a bit of a nervous flyer.  I'm the guy on the plane who looks around at all the flight attendants after every little bump or noise to make sure nothing's wrong.  So when a US Airways A320 was taken down by a flock of geese, that was the last thing I needed to hear about.  I had no idea something as small as a bird could have such an impact on a jet airliner.

And then there was that episode of Mythbusters where they shot chickens out of a cannon into an old aircraft frame to see the damage.

Even though it turns out they were shooting frozen chickens (they forgot to thaw them), the results were still scary.

So how frequent are bird strikes?  And when do they occur?  I was given a dataset containing information about all the recorded bird strikes over the past 12 years.  The first thing I noticed was that the file was almost 100,000 lines long.  Crap.

Fortunately, out of these 100,000 strikes there were only 107 injuries and 5 fatalities.  Humans seemed to fare much better than the birds.  In order to get the most out of this rich dataset, I used Tableau--a data visualization product that can make some fancy interactive visualizations.

This is what I came up with:

To get a full appreciation for the interactive nature of Tableau you have to click the link above.  Here are some highlights.

I had no idea that birds can fly as high as 25,000 feet!  It makes sense that some of the most damaging (costliest) strikes occurred when the plane was traveling fast at a higher altitude.

Texas and California have had the most strikes, but this is a bit misleading.  They are also the most populous states, and have some of the nation's largest airports.  There are many more flights originating in these two states compared to other smaller states.  That means there are many more chances for a strike.

The majority of strikes occur in clear skies during the day.

Monday, June 3, 2013

Analyzing the Semantic Web

As part of an assignment for a class I'm taking on Coursera, I recently played around with a big data set representing the Semantic Web--an attempt to organize the world into a network of data that's capable of being understood and manipulated without direct human input.  One of the dreams of this project is to allow machines to interact with the entirety of the world wide web on their own.

The data set that describes the Semantic Web is a classic example of Big Data.  The set I used for my analysis is known as the Billion Triples dataset.  It contains a billion entries in a format known as RDF triples.  This format requires statements to be expressed in the form:

Subject Predicate Object

For example, to say something like "Bob is Tim's father" in RDF you would write

Subject:[Bob] Predicate:[is the father of] Object:[Tim]

In the Semantic Web, the entities in the RDF triples are not confined to only people, but can be anything described by a URI (Uniform Resource Identifier)--literally anything.  The Billion Triples database therefore contains a ton of information describing the relationships of interconnected people/places/things/companies.  We can treat this set as a directed graph where each entry represents an edge between two vertices (the subject and object).

An example of a directed graph.  If this were to represent the Semantic Web, each of the circles would be represented by a URI.  The arrows are drawn whenever a subject points to an object in RDF.

Analyzing graphs like this is a standard task for data scientists, and one common quantity to compute is the out-degree of each node.  This represents the number of objects that a given subject links to.  In the example graph above there are 2 arrows leaving node 7.  Therefore node 7 has an out-degree of 2.  We can learn a lot about a graph by computing a histogram of the out-degree of all its nodes.
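Here's a toy, in-memory version of that computation in Python, using a handful of made-up triples.  The real job, of course, was far too large for this and had to run on a cluster, as described below.

```python
from collections import Counter

# Toy RDF triples: (subject, predicate, object) -- invented examples
triples = [
    ("Bob", "isFatherOf", "Tim"),
    ("Bob", "livesIn", "Trenton"),
    ("Tim", "livesIn", "Trenton"),
]

# Out-degree: how many objects each subject links to...
out_degree = Counter(s for s, p, o in triples)
# ...and the histogram: how many subjects have out-degree d
histogram = Counter(out_degree.values())
print(out_degree)  # Counter({'Bob': 2, 'Tim': 1})
print(histogram)   # Counter({2: 1, 1: 1})
```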

However, the Billion Triples data set is so large (>0.5 TB) that a single computer can't handle it.  Instead we have to use a cluster of computers, each operating on a small chunk of the data.  This is a classic fit for the programming model known as MapReduce, in which a map step processes each chunk on its own machine and a reduce step then combines the results with a simple aggregate function (like counting).

When files are this large, they can't be stored in traditional file systems.  Instead, distributed file systems like Hadoop's HDFS break up large files into many smaller chunks stored on separate machines across a network.  In this way, file access can be parallelized and you can scale up your MapReduce jobs into the cloud.  This is exactly what I did for my analysis of the Semantic Web.

Using Amazon Web Services, I pulled the data set from S3 storage and analyzed it on their cloud computing servers.  To set up the MapReduce jobs, I used the programming tool Pig.  This tool coordinates the resources of the 20 compute nodes I ran my script on.  In my Pig script I group the entries by subject, then count the number of entries for each subject (the out-degree).  Then I count the number of subjects having out-degree d for my histogram.  After 4 hours of running in the cloud, I get the following plot:

The distribution of subjects having various values for their out-degree.  Fits to the data were obtained with the curve_fit function in SciPy's optimize module.


The code I wrote to generate this plot fits an exponential function (blue) and a powerlaw function (red) to the distribution.  A result from graph theory is that the out-degree histogram of a random graph will follow an exponential distribution.  Since the blue line is a terrible fit to the data, we can tell that the Semantic Web is not a random graph. Instead, we can see that a powerlaw with index -2.2 does a pretty good job at fitting most of the data.

This index of -2.2 in a powerlaw distribution is a number that frequently comes up in the analysis of networks.  Researchers studying the patterns of phone calls among AT&T's customers also find the exact same distribution with the exact same index.  There is something fundamental in the way networks organize themselves such that they produce powerlaw histograms with an index near -2.2.  I find it really interesting that a network of telephone calls among friends displays the same properties as a catalog of the relationships between everything in the Semantic Web.  Even more interesting was the cloud computing I got to do (for free with a student grant from AWS, I should add!).

Thursday, May 23, 2013

My First Blog Post: A Look at Severe Weather in the US

Following Monday's devastating tornado in Moore, OK, I've been thinking about all the severe weather that's been in the news.  Whether it's hurricanes in New Jersey or two EF-5 tornados hitting the same town in a 15-year span, it just seems like there's always some natural disaster on TV.

And yeah, if this is starting to sound like crazy talk, I feel the same way.  People tend to emphasize recent tragedies and forget about the equally bad things that happened in the past.  Plus, I'm pretty sure there's always a group of people who want to think that they're living in the End of Days.  Still, with global temperatures on the rise, there is the possibility that a hotter climate is fueling more severe storms.

So, can we put this to the test?  Have the past few years had abnormally bad weather?  I started trying to answer this question with data from the NOAA's Storm Prediction Center.  They have archives of all the reported tornados that touched down in the US over the past 60 years.  I wrote a quick web crawler to grab data from two separate time periods: 2000-2004 and 2010-2012.  The 2000-2004 data will serve as a baseline, and I will look to see if there are more tornados per year in the 2010-2012 set.  Right off the bat I can tell you that I don't remember any bad tornados from 2000-2004, and I can name at least 2 from the more recent years (Joplin, MO and Tuscaloosa, AL).  Let's see if I'm remembering recent disasters at the expense of older ones.

Below are two maps I made showing the average number of tornados per year over the two time periods.  Results are broken up by state.


These maps were made from the NOAA's Storm Prediction Center Archive
It does appear that there were more tornados in the second period (over 300 more per year).  Also, the number of tornados in the so-called "Dixie Alley" (Alabama, Mississippi, and surrounding areas) has risen sharply.  But is any of this significant?  My guess is probably not, although I don't have time to run a significance test on it.  It's possible that the number of storm chasers went up from 2000 to 2012, and more tornados that would have gone unreported are now being documented.

I also made maps of the strong tornados that occurred during these time periods.  I classified a strong tornado as EF-3 or greater.  These twisters are damaging enough at EF-3 to take the roof off your house, but fortunately they are much less common.  The next two maps show the average number of strong tornados per year, and you can see that they're much less frequent than their weaker counterparts.
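As a sketch of the bookkeeping behind these maps, here's how the per-state and strong-tornado counts could be tallied from tabular rows.  The column names and rows below are my own stand-ins, not the archive's actual format.

```python
import csv
import io
from collections import Counter

# Toy rows shaped loosely like the SPC archive -- columns are assumptions
raw = """year,state,f_scale
2011,AL,4
2011,AL,0
2011,MO,5
2003,KS,1
"""
rows = list(csv.DictReader(io.StringIO(raw)))

# Tornados per state in the recent window, plus the strong (EF-3+) subset
recent = [r for r in rows if int(r["year"]) >= 2010]
per_state = Counter(r["state"] for r in recent)
strong = Counter(r["state"] for r in recent if int(r["f_scale"]) >= 3)
print(dict(per_state))  # {'AL': 2, 'MO': 1}
print(dict(strong))     # {'AL': 1, 'MO': 1}
```

Dividing each count by the number of years in the window gives the per-year averages plotted in the maps.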


Average number of strong (EF-3 or greater) tornados per year.

Ok, those two maps actually look different!  Indeed, there were over twice as many strong tornados on average in a given year from 2010-2012.  I'm also less tempted to dismiss the differences as due to a recent increase in storm chasers.  My reasoning is that these strong tornados are hard to miss.  I would think it's much easier to miss out on an EF-0 twister in MiddleOfNowhere, Kansas than one of these monsters.  Then again, I'm not a meteorologist; this is just an educated guess.

It's tempting to see this and shout "global warming is killing us all!" as you run back to your underground bunker that was deserted after December 22, 2012.  But keep in mind these are very small numbers being compared over a very short time baseline.  To make a better argument for climate change, I would have to go back and look at historical records from 50 years ago.  Even if I did this, I'd still have to deal with the fact that mobile Doppler radar trucks and cell phone videos weren't around 50 years ago to document every tornado.  I guess I'm stuck, but I did learn a few things (namely how to parse XML with Python, and how to retrieve lots of data with an automated web crawler).  This was kind of fun.