Yesterday, I saw a forum post about predictions for when a video game (DNF) would be released. That prompted me to generate a multi-year vertical bar chart graphing the number of predictions per month/year over the last several years, and I felt like sharing the process.
I love making graphs. I also love using Google’s Chart API for making quick but decent-looking graphs. It gets the job done easily, and since it is an HTTP API, it’s easy to manipulate, share, and program.
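Everything about a chart lives in the URL, so generating one is just string assembly. The numbers and labels below are made up, but the query parameters (cht for chart type, chs for size, chd for the data, chxt/chxl/chxr for the axes) are the Chart API’s real ones:

http://chart.apis.google.com/chart?cht=bvs&chs=400x200&chd=t:5,12,30,48&chds=0,50&chxt=x,y&chxl=0:|2007|2008|2009|2010&chxr=1,0,50

Tweak a number in the URL and you get a new chart on the next request, which is what makes it so easy to script and share.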
I’ve been doing a fair amount of screen-scraping and reverse-engineering of web applications recently and wanted to share my workflow. When working with web applications whose output is dynamic or unknown, Python’s REPL (Read-Eval-Print Loop) combined with IPython is a great choice for quickly getting something working. Python’s built-in libraries, combined with 3rd-party libraries, cover most of what this kind of work requires.
Say I wanted to emulate a normal user in a web browser. First I would make a request to a homepage, then I would browse deeper into the website.
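A minimal sketch of that flow in Python 2, using the standard urllib2 and cookielib modules (the post doesn’t commit to a particular library, and the URLs here are placeholders):

import cookielib
import urllib2

# Keep cookies between requests so the site sees one continuous session
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.addheaders = [('User-Agent', 'Mozilla/5.0')]  # look like a normal browser

# 1. Hit the homepage first, like a real visitor would
home_html = opener.open('http://example.com/').read()

# 2. Then follow a link deeper into the site, carrying the same cookies along
page_html = opener.open('http://example.com/forum/some-thread').read()

Running those lines one at a time in IPython is where the REPL workflow shines: inspect home_html, adjust the next request, and repeat until the scrape does what you want.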
Last month, Google Reader announced support for accessing user data via OAuth. Previously, access was unofficially allowed using the ClientLogin method, which required the user’s login and password. OAuth seems to be the recommended access method going forward, due to the security it provides for the user. I’ve finally had a chance to figure out OAuth using Python and how to get your Reader data, so I wanted to share my method.
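To give a flavor of what the calls end up looking like, here is a rough sketch with the python-oauth2 library, not the exact method from the post; it assumes you’ve already finished the three-legged OAuth dance, and the keys, tokens, and even the subscription-list endpoint are stand-ins:

import oauth2 as oauth

# Placeholder credentials: the real values come from registering your app
# with Google and walking the user through the authorization step.
consumer = oauth.Consumer(key='your-consumer-key', secret='your-consumer-secret')
token = oauth.Token(key='user-access-token', secret='user-access-token-secret')

client = oauth.Client(consumer, token)  # signs each request with OAuth headers
resp, content = client.request(
    'https://www.google.com/reader/api/0/subscription/list?output=json', 'GET')
print content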
Many years ago, when I switched to OSX from Windows, I quickly knew it was the right choice. As the years have passed and my knowledge of all three major platforms (OSX, Windows, and Linux) has increased, I’ve become more confident and happy with my choice of OSX. The OS defines how you interact with the software, but the software defines what you can accomplish and enables you to do it.
As a programmer and a part-time sysadmin, I spend a huge amount of time in the OSX Terminal and find it to be one of the better CLI environments I’ve used, after some tweaking. Windows + PowerShell or CMD (shudder) is just terrible. Linux with Xterm comes close, but doesn’t have the same easy usability that I enjoy about the OSX Terminal. Today, I’d like to share my tweaks and explain what I enjoy about them.
MacFusion is a great little app that allows you to mount network locations over SSH, which I’ve mentioned before. With the latest 10.6.3 update to OSX, the latest version of MacFusion, 2.0.3, breaks. Unfortunately, the developers of MacFusion haven’t touched the app in over 2 years.
I found a fix on the MacFusion Google Group. A third-party developer, nall, fixed the problem and updated the binary to 2.0.4, available at http://github.com/downloads/nall/MacFusion2/Macfusion-2.0.4-SL.zip. This version works fine for me on 10.6.3.
I’ve been writing on this site for a couple years now, slowly figuring out how to organize things and finding out what I like to write about. I’ve finally come to the conclusion that it’s time to split into two sites, asktherelic.com and thebehrensventure.com.
Ask the Relic (this blog) will focus on programming, sys admin, web dev, and other tech things I’m trying to make a career out of, while TheBehrensVenture will focus on my random travels, stories, and adventures.
I love the punchcard graph on GitHub, showing the hourly/daily/weekly output of a project in a nice and neat format. I decided to apply the punchcard format to my Bash history, one of the random bits of data I have lying around.
By default, Bash stores each history entry as just a sequence number and the command. To record a timestamp alongside the command, you must set the HISTTIMEFORMAT variable in your bashrc.
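For example, a line like this in your ~/.bashrc (the strftime format string is your choice; %F %T is just a readable one):

export HISTTIMEFORMAT="%F %T "

With that set, history prints the date and time next to each command, and Bash also writes a #<epoch-seconds> comment line for each entry when it saves ~/.bash_history, which is exactly the raw data a punchcard needs.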
I finally switched all of my projects over to git or git-svn and have never been happier. Everything has so many more options than svn, everything is faster, and the universe of software for git is way better than svn. Switch now!
A while back I wrote a command to print the total number of lines contributed per author for my svn repository, because I wanted to see how awesome I am.
I’m currently working on a group project and am annoyed at the lack of output from my teammates. Wanting hard metrics of how awesome I am and how awesome they aren’t, I wrote this command up.
svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r

Output:

 2038 matt
  433 john
  263 ryan
  186 alice
  167 bob

This command will print a full repository listing of all files, remove the directories, run svn blame on each individual file, and tally the resulting line counts.