When I read Rufus Pollock’s editorial “Forget big data, small data is the real revolution”, it occurred to me that everybody, probably even I, could take advantage of what Pollock calls the “democratization of the masses”. In this post I will show how information can be “pulled together” using only basic programming skills. This information can then be used for improved decision making. The example I decided to use to put this into practice might be the most universal conversation topic: the weather.
Practicing “Small Data”
Usually, I follow my interest in weather on a very basic level: I read the weather forecast. I try not to rely only on the basic forecasts, though. Hence, I like to visit wetterzentrale.de because of the fantastic amount of information they make available, and I like the visualizations that forecast.io and forecast.io/lines present.
The unfortunate thing is that you kind of have to take those products on faith. I hadn’t seen a good weather map in a long time, until I was sailing recently at DHH Chiemsee, which makes prints of the DWD analysis maps of surface air pressure (together with annotations of observations) available on a daily basis.
The following ideas came to my mind:
- it would be very interesting to see the progression of these pressure maps over time
- since they are analysis maps, commonly still hand drawn, it would be interesting to compare them to other analyses, done by somebody else
- a description of the current situation associated with the pressure maps would be useful, so that an amateur like me gets some hints
With this information at hand, everybody could form a better-informed opinion of the current weather situation!
After some research, which did not take very long at all, I found some other sources on the internet that allowed me to come up with the following map:
The code I wrote creates this plot at times that can be specified. The left column shows the current analyses performed by different institutions, the right column shows predictions performed by KNMI.
I wanted to do all this in python, so I needed to figure out how to get images from the internet, and along the way I learned about the packages Image (I didn’t know that there was such a thing as a greyscale png) and sched. Even though there are still some (minor?) things that need to be ironed out (plotting of text with matplotlib, style of the headings), I put the code up on github.
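For anyone curious how those two packages fit together, here is a minimal sketch, not the actual script: it downloads one map with urllib, opens it with Image, and uses sched to repeat the download at a fixed interval. The URL, file name, and six-hour interval are placeholders I made up for illustration.

```python
# Minimal sketch (python 2.7): fetch one pressure map and re-run the
# download on a fixed schedule. URL, file name, and interval are made up.
import sched
import time
import urllib

from PIL import Image

MAP_URL = 'http://example.com/analysis_map.png'  # placeholder URL


def fetch_map(url=MAP_URL, filename='latest_map.png'):
    """Download one map and open it with PIL (greyscale pngs typically open as mode 'L')."""
    urllib.urlretrieve(url, filename)
    return Image.open(filename)


scheduler = sched.scheduler(time.time, time.sleep)


def fetch_and_reschedule():
    fetch_map()
    # queue the next download in six hours
    scheduler.enter(6 * 60 * 60, 1, fetch_and_reschedule, ())


scheduler.enter(0, 1, fetch_and_reschedule, ())
scheduler.run()
```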
I’d be very happy to hear what you guys think! Happy birthday Ferdi!
… and there’s a lot of water in that circle too…
via Very Spatial
I just experimented with twitter digests on this blog — a feature that has been broken since twitter made some changes to their api. I suspect that this might have led / might lead to some blog posts showing up in your RSS feed which are actually “just” twitter posts. Sorry for the inconvenience.
This is awesome, funny, shocking, and horrifying — all at the same time and for the entire four minutes! A quick demonstration of types in Ruby and JavaScript.
via Hilary Mason
Last week, Thomas Herndon, an economics grad student, published a paper that refuted a renowned economics paper authored by two Harvard professors on three counts:
- some data was excluded from the analysis without stating the reasons;
- during processing of the data, a debatable method for weighting the data was used;
- there was a “coding error”: the authors had used MS Excel for their analysis and used a seemingly wrong range of cells for one calculation.
As far as I can tell, Mike Konczal was the first to write about the freshly published paper on April 16th.
First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don’t get their controversial result.
On April 17th, Arindrajit Dube, assistant professor of economics at the University of Massachusetts, Amherst (the same school as Thomas Herndon), presented a short and concise analysis of the reasoning behind Herndon’s paper. One key point of his relates to the fact that different ranges of the data show a varying degree of dependence: in this case, the strength of the relationship [between growth and debt-to-GDP] is actually much stronger at low ratios of debt-to-GDP. From there he goes on to wonder about the causes of this changing relationship.
Here is a simple question: does a high debt-to-GDP ratio better predict future growth rates, or past ones? If the former is true, it would be consistent with the argument that higher debt levels cause growth to fall. On the other hand, if higher debt “predicts” past growth, that is a signature of reverse causality.
Future and Past Growth Rates and Current Debt-to-GDP Ratio. Figure’s source.
Looking at the data is one thing, but looking into causal relationships should always go with it. A lot of people suggest that making data and analysis methods publicly available would prevent such errors. I agree to some extent. It is nice to see a re-analysis performed in python online. However, why did the authors not consider these causal relationships? Did they not have enough time for a rigorous analysis? And would a rigorous analysis not be necessary for research that forms the basis for (current) political decisions?
[…] it’s just the usual (mis)use of economics research results. Politicians like the numbers that give them ammunition for their position
“Believable” research: If your results sound too good or too interesting to be true, maybe they are not, and you better check your calculations. Although mistakes are not uncommon, the business as usual part is that the results are often very sensitive to assumptions, and it takes time to figure out what results are robust. I have seen enough economic debates where there never was a clear answer that convinced more than half of all economists. A long time ago, when the Asian Tigers were still tigers, one question was: Did they grow because of or in spite of government intervention?
Stephen Colbert, of course, has his own thoughts, and has invited Thomas Herndon to chat with him:
- the connection to the dropbox folder still works
- it is faster (see updated chart)
- they updated python to 2.7.3 (and 3.2 if you like), and numpy to 1.6.2
- it is possible to run ipython and ipython notebooks!
- it is possible to schedule tasks!
- they extended possibilities for web servers and mysql
- latex, git integration
I became aware recently of three books that are related to data-analysis, modelling, and statistics in a fairly broad sense.
They are pictured below, from left to right:
- “Python for Data Analysis” by Wes McKinney (of pandas fame) published by O’Reilly
- “NumPy Cookbook” by Ivan Idris published by Packt Publishing
- “NumPy 1.5 Beginner’s Guide” by Ivan Idris published by Packt Publishing
Python for Data Analysis
This is the most in-depth book of the three. It covers the most important python modules: IPython, NumPy, Pandas, Matplotlib. Additionally, it has chapters with examples on practical issues with data (aggregation, data with a time-stamp, sorting). I really just started diving into it. However, it already led me to upgrade to IPython 13.1. It seems to be well suited for my level of programming experience: having some experience, trying to learn more about existing tools.
NumPy Cookbook
The title already gives it away: the book is organised in sections with “recipes”. Mostly, these recipes are self-contained. The focus is clearly on NumPy, even though Matplotlib, IPython, and also Pandas are covered to some extent. I enjoyed browsing through it; most of the examples are interesting (resizing images, or playing with PythonAnywhere, like I’ve done before, for example). Generally, I think this is a great resource to have.
NumPy 1.5 Beginner’s Guide
Despite being by the same author (Ivan Idris), there is surprisingly little overlap between his two books. “NumPy 1.5” covers NumPy in great detail, and is as such mostly useful for beginners who are trying to use python for some numerical analysis. When I read this book, I was also reminded that the webpage listing NumPy functions is a very valuable resource (one I tend to spend too little time with).
It is interesting to see that people realise that there is a market for books explaining open source tools. And I do think those books complement available documentation nicely.
I recently had some time series analysis to do, and I decided to do this work in Pandas. In particular, I had to deal with many timeseries, each stretching from a different startpoint in time to a different endpoint in time. Each timeseries was available as an ASCII file. The data were daily values. The time information was given in three columns, representing year, month, and day, respectively.
I found these two posts by Randal S. Olson very useful resources:
- Using pandas DataFrames to process data from multiple replicate runs in Python (link)
- Statistical analysis made easy in Python with SciPy and pandas DataFrames (link)
Here is a cookbook style layout of what I did:
The following steps show how easy it was to deal with the data.

1. Read the input data:
```python
# pa is pandas (import pandas as pa); data_path and result hold the
# data directory and the current file name, respectively
cur_sim_ts = pa.io.parsers.read_table(os.path.join(data_path, 'test', result),
                                      header=None,
                                      sep='\s*',
                                      parse_dates=[[0, 1, 2]],
                                      names=['year', 'month', 'day', result[:-4]],
                                      index_col=0)
```
where `'\s*'` means any whitespace. The dates were given in three columns, one each for year, month, and day. Can it get simpler and nicer than that?
It is possible to repeat step 1 multiple times, each time extending the pandas data_frame. Unfortunately, this still looks a little ugly, but it works:
```python
if counter_cur_file > 0:
    final_df = final_df.combine_first(cur_sim_ts)
else:
    final_df = cur_sim_ts.copy() / 10.0
```
In the else part, the Pandas data_frame is initialised. It so happens that this and only this series of the loop has to be divided by ten. In all other cases, the time_series that was read in step 1 is “appended” to (or rather combined with) the previously initialised data_frame. The wicked thing is that each time_series is put at the proper “place” in time within the data_frame. Dates are real dates. This is beautiful, but I had to be a little careful with the data I had at hand, in which every month has 30 days.
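Putting the two snippets together, here is a minimal sketch of the whole loop. The file discovery via os.listdir and the use of enumerate for counter_cur_file are assumptions; only the read_table call and the if/else branch come from the snippets above.

```python
import os
import pandas as pa

data_path = '.'   # assumed location of the result files
final_df = None

for counter_cur_file, result in enumerate(os.listdir(os.path.join(data_path, 'test'))):
    # step 1: read one ASCII file; the three date columns become the index
    cur_sim_ts = pa.io.parsers.read_table(os.path.join(data_path, 'test', result),
                                          header=None,
                                          sep='\s*',
                                          parse_dates=[[0, 1, 2]],
                                          names=['year', 'month', 'day', result[:-4]],
                                          index_col=0)
    # step 2: combine with what has been read so far
    if counter_cur_file > 0:
        final_df = final_df.combine_first(cur_sim_ts)
    else:
        # the very first series happens to need a unit conversion
        final_df = cur_sim_ts.copy() / 10.0
```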
As soon as this data_frame is constructed, things are easy, for example:

- plotting, particularly plotting only a certain time-interval of the data
- saving the data_frame

A short sketch of both follows below.
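As a small illustration (the date range and file name below are made up), both of these are essentially one-liners:

```python
import matplotlib.pyplot as plt

# plot only one time interval; slicing a date-indexed data_frame by
# date strings selects the corresponding rows
final_df['1990-01-01':'1999-12-31'].plot()
plt.ylabel('precipitation')
plt.show()

# save the whole data_frame to disk
final_df.to_csv('all_timeseries.csv')
```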
For me it was of particular interest to find out how many consecutive dry and wet days there are in each time series. I introduced a precipitation threshold: if the daily amount of precipitation is above that threshold, the day is considered “wet”, otherwise it is considered “dry”. I wanted to count the number of consecutive dry and wet days, and remember these counts for a time series. This is the purpose of the function below. It is coded a little brute force. Still, I was surprised that it performed reasonably well. If anybody has a better idea, please let me know. Maybe it can be of use for other Pandas users. Note: a time_series in the Pandas world is obtained by looping over a data_frame’s columns.
```python
def dry_wet_spells(ts, threshold):
    """Return the duration of spells below and above threshold.

    input
    -----
    ts         a pandas timeseries
    threshold  threshold below and above which days are counted

    output
    ------
    ntot_ts              total number of measurements in ts
    n_lt_threshold       number of measurements below threshold
    storage_n_cons_days  list that stores the lengths of the consecutive
                         dry and wet sequences
    """
    # total number of (non-missing) values in ts
    ntot_ts = ts[~ts.isnull()].count()
    # number of values at or below the threshold
    n_lt_threshold = ts[ts <= threshold].count()

    # type_day = 0 -> dry
    # type_day = 1 -> wet
    # initialisation: assume the first day is dry
    type_prev_day = 0
    storage_n_cons_days = []
    n_cons_days = 0

    for cur_day in ts[~ts.isnull()]:
        if cur_day <= threshold:
            # current day is dry
            type_cur_day = 0
        else:
            # current day is wet
            type_cur_day = 1
        if type_cur_day == type_prev_day:
            n_cons_days += 1
        else:
            # the spell ended: store its length and start a new one
            storage_n_cons_days.append(n_cons_days)
            n_cons_days = 1
            type_prev_day = type_cur_day

    return ntot_ts, n_lt_threshold, storage_n_cons_days
```
With all of this, I can produce histograms like this:
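For reference, here is a rough usage sketch of how such a histogram could be put together; the threshold value, the bins, and the way the spell lengths are collected across columns are my assumptions, not the original script.

```python
import matplotlib.pyplot as plt

threshold = 1.0  # assumed wet/dry threshold, e.g. in mm/day

all_spells = []
for name in final_df:            # looping over a data_frame yields the column names
    ts = final_df[name]
    ntot, n_dry, spells = dry_wet_spells(ts, threshold)
    all_spells.extend(spells)

plt.hist(all_spells, bins=range(1, 31))
plt.xlabel('spell length (days)')
plt.ylabel('number of spells')
plt.show()
```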
update, Monday, October 29, 2012, 3:18pm (CDT): Eric Holthaus at the Wall Street Journal has a more in-depth analysis for the coming hours.
I’ve been following the unfolding of Sandy over the last couple of days. I’ve been relying on many online sources. Since I don’t have anything original to say, I decided to wait with a post here until things have settled down a bit.
However, I wanted to share this one chart of the water levels at “The Battery NY” (original available here). It shows observed (red dots) vs. modelled (pink and green) water levels. Note that the current water level is (slightly) higher than predicted. This has been true for the last couple of high tides, but those had smaller peaks.
The coming hours will be critical! There is a high tide coming up, coinciding with the landfall of Sandy. Additionally, there is a mid-latitude trough just east of the Great Lakes that pulls Sandy onto the North American continent. And, as if that wasn’t enough, the North Atlantic Oscillation is in a negative phase, pulling Sandy towards the North-East. Decent summaries can be found here and here.