Retrieve Starbucks Gift Card Balance with Python

Wrote a quick script to scrape the current account details and the balance of the primary card from the German version of the Starbucks website.

It uses mechanize to simulate a login and BeautifulSoup 4 to extract the relevant information. The output of the script looks something like this:

Account status: gold
Stars until next free drink: 2
Number of free drinks: 3
"Hallo Frank"
Cards:
Primary card: card1
        Card: card2
Balance of primary card (gold): 14,20 €
(Updated: 13.01.2014 22:12)

And since we are all lazy, the gist below also contains a version that I use to get daily balance reports via e-mail.

Note: The code presented below uses the German Starbucks website. Since it looks similar to the US version, only minor tweaks should be necessary to make it work universally.

 

Little Robot Update

 

I’m currently working on making the robot a bit more eco-friendly ;-) (okay, granted… I’m too lazy to recharge the battery all the time). The relays are attached to the main control board, which is powered through a netbook at all times; this way the motors can be shut down and only powered when movement is required.

The software is currently being rewritten to extend the runtime per battery charge as well. I’m working on putting the ATmega328 chips (running the head, motor and sensor circuits) into sleep mode unless interrupts are received on the serial RX/TX pins. The code for this will be on GitHub (soonish).


Consumer EEG Blink Detection

 

I recently had time to tinker with my NeuroSky MindWave - a cheap passive consumer EEG. I’ve decided to build a simple menu controlled by deducing blink events from the raw data stream the device offers to developers. All code discussed here is available in a GitHub repository.

 

Hardware Setup

For now the device is connected to a PC using the supplied radio receiver that functions as an A/D converter and supplies a virtual serial port.

For more information on the device itself, check out the product page or the Wikipedia article.

 

Parser Implementation

Though NeuroSky offers quite a few abstraction layers on top of the serial data stream, I’ve decided to take the direct approach of communicating with the device over a serial connection. The motivation for this was that I eventually want to do all of this processing on a microprocessor anyway. The API reference is available as part of the MDT at the NeuroSky wiki. It provides detailed explanations of the serial data stream and good guidelines for implementing a parser.

The parser is written in Java as part of the Processing sketch at https://github.com/FrankGrimm/readmind/tree/master/processing/readmind. It uses the Processing serial library to communicate with the receiver and processes all incoming data in a separate thread. This was necessary because the frequency of incoming data (for the raw signals) is much higher than the number of frames drawn per second; sampling the state only at drawing time simply wasn’t sufficient for near real-time statistical analysis. Whenever a valid packet payload is decoded, the background thread notifies different components of the new datapoints. These components are primarily data buffers for the visualization and several classes that perform live evaluations on the signal data for detecting blink events.

While implementing the parser (and fixing bugs… and fixing bugs…) I’ve hit quite a few pitfalls:

Most applications for this device rely on attention and meditation levels. While these values are calculated by a proprietary, undisclosed algorithm, they could have come in handy when interpreting the raw data points. Especially the attention level seems to correlate highly with the reliability of blink events. While the implementation guide specifies signal codes for attention and meditation levels, as well as blink event signals, none of those packet types are sent by the device I own. In case you need these values, the best way to obtain them would be to use one of the other high-level APIs or play man-in-the-middle between the official socket wrapper and the serial device. The approach discussed in the next section provides “good enough” results for detecting blink events though.

The signal quality, especially of the raw signal, depends highly on a firm seating of the device and minimal facial movement during its usage. If there’s too much activity, the reference wave and the measurement of the EEG sensor will differ, resulting in larger amplitudes of the raw signal and unclear readings. While the protocol offers signal quality packets, these only indicate the quality of the connection between the radio transmitter and the USB dongle. This behaviour results in a lot of false positives, which are an open issue on my todo list for this implementation. The only way to deal with those rapid changes in amplitude is to ignore rapid successive blink events, which leads to false negatives when the user simply blinks fast enough.

The low resolution of the signal values for the 8 EEG bands makes the data pretty inconclusive for any useful live interpretation from my point of view. I’ll have to figure out if I can do some kind of attention / meditation aggregation on those values myself at some point.

Detecting the device

I dislike fixed device IDs, so I wrote a rudimentary synchronous version of the parser that enumerates all available serial ports and tries to read a full packet of valid (as in valid checksum) MindWave data. If no such packet is received within the first few hundred bytes, or a timeout is hit, the port is discarded and the code moves on to the next serial port. The implementation for this is contained in the class MindwaveDeviceFinder in mindwave_find.pde.

 

Visualization (Processing)

The parser thread continuously updates several buffers: one large buffer for 10240 raw values and smaller buffers for each of the 8 supported EEG bands. The bands are not updated often, so smaller buffers are sufficient. The ValueBoxDrawer class holds and draws the buffered data; it is located in readmind.pde. I implemented a branch where all of the band graphs share a common y-axis scale, but this basically led to some bands not being readable at all, so I’ve discarded that approach. For now, each graph’s y-axis is scaled to show the range MAX(buffer)-MIN(buffer) of its values. The x-axis depends on the size of the buffer that’s plotted by the instance of ValueBoxDrawer (all band buffers have the same size, the raw buffer is much larger).

The last read signal quality packet (0-255) is displayed in a little bar graph, similar to the ones seen on mobile phones. The value is not buffered because it is not really used for anything other than user information at the moment.

The major portion of the screen is used by a simple rectangular menu section (containing menu items and sub-menus). This menu can be controlled by mouse events and blinking (this is discussed later on).

[Screenshot: readmind visualization and menu]

 

Visualization (R)

The repository contains a few R scripts (get R here) I’ve used for visualizing and analyzing the raw signal and band data.

Both enumerate all the exported raw- and band-data files in the working directory and plot the results of several calculations in the same directory.

Script 1 graph-means.r

This script exports a visualization of the raw signal data together with some simple moving averages that have been used to estimate factors in the processing implementation (see section “Blink detection”).

[Plot: raw signal with simple moving averages]

 

Script 2 graphs-rawband.r

This script exports visualizations of all the raw signal data, as well as the bands, in three different files (raw-only, color-coded bands-only and a combined image).

[Plot: combined graphs]

[Plot: raw graph]

[Plot: band graph]

 

Blink detection

// works for my brain

For this first iteration, a simple moving average is calculated over the last Navg (=400) raw values. Every time a new raw value is received, the oldest raw value in the queue is subtracted from a cumulative sum and the new raw value is added. Dividing this sum by the size of the buffer yields the unweighted average over the full buffer. The average value has the same dimension as the raw values and can be used for simple comparisons between the current average and the current raw value.

The average value is then multiplied by a static factor. All peaks in amplitude above this value are considered part of a blink event.

I’ve used the R script (graph-means.r in the repository) to tinker with the size of the buffer (Navg) (the cardinality of the subset of all raw data received at a given time that’s factored into the moving average), as well as the scalar factor (= 2.5) used to separate regular noise / signal movements from peaks that could potentially be associated with blinking.


The Java implementation for the same analysis on live data is located in the blinkdetect.pde source file. Using these peak amplitudes as they occur would result in a lot of false positives and a large number of short, duplicate events that would normally be attributed to the same blink. To combine closely related peaks and discard outlying values that don’t appear to be related to other peaks, I’m using an instance of BitSet with a fixed size Nblink (= 100, with Nblink < Navg). Whenever this smaller buffer accumulates enough peaks (>= 30), a blink is considered to have started and the current time (in milliseconds) is recorded. If a blink event has started and the number of peaks in this buffer drops below that same value, the blink is considered to be ending. In this case, the duration is calculated and a blink event is triggered in the main class (by invoking the blinkHandler method).
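
The actual implementation is the Java code in blinkdetect.pde; the following JavaScript-flavoured sketch only illustrates the running-average, threshold and window logic described above (the constant names and the handler are made up for this illustration):

```javascript
// Illustrative sketch only: the real implementation is the Java code in
// blinkdetect.pde; the constants mirror the values from the text above.
var N_AVG = 400;         // raw values in the moving-average buffer
var N_BLINK = 100;       // size of the peak window (N_BLINK < N_AVG)
var AVG_FACTOR = 2.5;    // scalar factor separating noise from peaks
var PEAK_THRESHOLD = 30; // peaks required inside the window to count as a blink

var avgBuffer = new Array(N_AVG).fill(0);
var peakWindow = new Array(N_BLINK).fill(false);
var avgIndex = 0, peakIndex = 0, sum = 0, peakCount = 0;
var blinkActive = false, blinkStart = 0;

function blinkHandler(durationMs) {       // invoked for every detected blink
  console.log('blink, duration: ' + durationMs + 'ms');
}

function onRawValue(raw) {
  // Update the cumulative sum: drop the oldest value, add the newest one.
  sum += raw - avgBuffer[avgIndex];
  avgBuffer[avgIndex] = raw;
  avgIndex = (avgIndex + 1) % N_AVG;
  var avg = sum / N_AVG;

  // A raw value whose amplitude exceeds the scaled average counts as a peak.
  var isPeak = Math.abs(raw) > AVG_FACTOR * Math.abs(avg);

  // Keep a running count of the peaks inside the smaller sliding window.
  if (peakWindow[peakIndex]) peakCount--;
  peakWindow[peakIndex] = isPeak;
  if (isPeak) peakCount++;
  peakIndex = (peakIndex + 1) % N_BLINK;

  // Enough peaks in the window start a blink; dropping below ends it.
  if (!blinkActive && peakCount >= PEAK_THRESHOLD) {
    blinkActive = true;
    blinkStart = Date.now();
  } else if (blinkActive && peakCount < PEAK_THRESHOLD) {
    blinkActive = false;
    blinkHandler(Date.now() - blinkStart);
  }
}
```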

Controlling the menu

In order to deal with the high number of false positive blink detections I’ve introduced a timer that both drives the activation of menu items and ignores certain blink events. The timer’s currently set duration represents the current activation state of a menu item (a rough sketch of this logic follows the list below).

  1. Regular state: No menu item is selected; the menu advances to the next item in the list every 700ms. This automatic advance only happens in the regular state.
  2. Initial state: A blink event occurred. To prevent large numbers of false positives (and to prevent ending up in a totally different menu) the timer is set to 500ms, and all other blink events are ignored during this time.
    To deal with rapidly occurring false positives, one could add a counter to the initial state and abort the selection process if too many events occurred during this timespan.
  3. Selected state: The initial state elapsed without cancelling the selection process. The timer is then set to 3000ms and the currently selected menu item is highlighted with a red border.
    If there is another blink event during this timespan, the menu item is activated (meaning an action is triggered or a sub-menu is entered). Otherwise the timer is set back to 700ms, which places the system back into the regular state and advances the menu to the next item.
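
As a rough sketch (again illustrative JavaScript, not the actual Processing code; the two menu functions are stand-ins), the timer logic above boils down to this:

```javascript
// Illustrative sketch of the menu timer described above; the real code is
// part of the Processing sketch.
var REGULAR_MS = 700, INITIAL_MS = 500, SELECTED_MS = 3000;
var state = 'regular';
var timerEnd = Date.now() + REGULAR_MS;

function advanceToNextItem() { /* highlight the next menu item */ }
function activateCurrentItem() { /* trigger the action or enter the sub-menu */ }

function onBlink() {
  if (state === 'regular') {            // a blink starts the selection process
    state = 'initial';
    timerEnd = Date.now() + INITIAL_MS;
  } else if (state === 'selected') {    // a second blink activates the item
    activateCurrentItem();
    state = 'regular';
    timerEnd = Date.now() + REGULAR_MS;
  }                                     // blinks in the initial state are ignored
}

function onTick() {                     // called from the draw loop
  if (Date.now() < timerEnd) return;
  if (state === 'regular') {            // advancing only happens in the regular state
    advanceToNextItem();
    timerEnd = Date.now() + REGULAR_MS;
  } else if (state === 'initial') {     // not cancelled: the item becomes selected
    state = 'selected';
    timerEnd = Date.now() + SELECTED_MS;
  } else {                              // selection timed out: back to regular
    state = 'regular';
    timerEnd = Date.now() + REGULAR_MS;
  }
}
```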

Next steps

There’s still a little TODO marker in the drawing section for displaying the keyboard in a grid instead of a (much too large) list like the other menus.

Other than that, I’ll need to figure out a way to limit false-positive recognitions of blink events on the software side of things in order to get this to a state where the menu inputs are fast and reliable enough to control text input or movements for one of my robots. At the current rate, and with all the false positives, the only thing that would stop a robot controlled by this setup from bumping into walls all the time is the ultrasonic sensors I’ve attached.

Another goal is to turn this into a USB HID and get rid of the huge delays for simulating keypresses or menu actions. The FFT is done in the headset component and the receiver side offers RX/TX lines for serial communication with a development board. I haven’t yet found specific schematics for the receiver though, and I don’t really want to break it in the process (apparently some solder lines need to be cut to get a clear serial communication going).

 



Accessing the HTTP message body (e.g. POST data) in node.js

A pretty common task for every web application is handling user input. While frameworks like express or connect provide convenient methods to access this data, there seems to be some confusion, and a lack of a stripped-down example, around pure node.js HTTP server approaches. This post is intended to fill that gap; thoughts, or an understandable yet more minimalistic approach that could go into the API documentation, are very welcome.

Full solutions

Update (12/07): This article has been getting a lot of link-love lately. Those of you who are looking for a full solution for handling and parsing forms in Node.js might be interested in:

The rest that follows, the original article, explains what those modules do under the hood.

Server and routes setup

For our example we need an HTTP server instance with a request handler that defines two routes:

  • ‘/’ A simple web page with a HTML form.
  • ‘/formhandler’ which accepts POST requests and handles the body that is generated by the form

All other requests (such as GET requests for favicon.ico) will be answered with a 404 Not Found message.

To get started we use the following code, which handles our two routes by sending back a 501 Not Implemented response.
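
The code itself was embedded as a gist that is not part of this archive; a minimal sketch of the setup described above could look roughly like this (the plain-text response bodies are my own choice):

```javascript
// Sketch of the initial server: both routes answer with 501 for now.
var http = require('http');

http.createServer(function (req, res) {
  switch (req.url) {
    case '/':             // start page, will serve the HTML form later on
    case '/formhandler':  // will handle the POST data generated by the form
      res.writeHead(501, { 'Content-Type': 'text/plain' });
      res.end('Not implemented');
      break;
    default:              // everything else, e.g. GET /favicon.ico
      res.writeHead(404, { 'Content-Type': 'text/plain' });
      res.end('Not found');
  }
}).listen(8080);
```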

If you’re running the code locally you should now be able to visit http://localhost:8080/ in your browser.

A simple form

In the next step we change the behaviour for the start URL ‘/’ to present the user with a little web form, by exchanging that part of the switch statement with a 200 OK response together with the appropriate HTML:
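
Again, the original snippet lived in a gist; the following is only a sketch of the replaced ‘/’ case. The field names username and userage match the keys that show up again when the body is decoded later on:

```javascript
// Sketch: replace the '/' case of the switch statement with this.
case '/':
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<!DOCTYPE html><html><body>' +
    '<form action="/formhandler" method="post" ' +
          'enctype="application/x-www-form-urlencoded">' +
    'Name: <input type="text" name="username"><br>' +
    'Age: <input type="text" name="userage"><br>' +
    '<input type="submit" value="Submit">' +
    '</form></body></html>');
  break;
```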

The ‘/formhandler’ part remains unchanged for now. If you run the script and visit the page in your browser you’ll be presented with a form asking for name and age.

Notice that the enctype parameter of the form is set to application/x-www-form-urlencoded. This specifies how the data is encoded and sent by the browser. URL encoding is okay for simple forms without file upload capabilities for example. What’s even better is that node.js provides a built-in module to parse this type of data. We’ll use it in a later step. The other parameters specify that by submitting the form the data should be sent to the ‘/formhandler’ route inside a POST request.

Read the message body

So far we have only looked at the header part of the HTTP request, which tells us what the browser is requesting. The body, which may contain everything from the two key/value combinations we use in the example up to multiple files, is separated from the header part by two line breaks. What follows is mostly either URL encoded data or multipart/form-data – which is basically data sent in distinguishable small chunks.

In node.js the request event we’re already handling is emitted when the header is fully received and parsed. At this point the client’s browser may still be sending body data, whether we want to use it or not. The only route we set up that has to handle this data on our side is the ‘/formhandler’ route (and even then it should only use the data if it arrives inside a POST request). We exchange our previous 501 code for this route with the following code:
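
As before, the embedded code is missing from this archive; sketched out, the new ‘/formhandler’ case could look roughly like this:

```javascript
// Sketch: the '/formhandler' case with a method check and the two events.
case '/formhandler':
  if (req.method !== 'POST') {
    // e.g. the user simply navigated to /formhandler in the browser
    res.writeHead(405, { 'Content-Type': 'text/plain' });
    res.end('Method not allowed');
    break;
  }
  req.on('data', function (chunk) {
    // chunk is a Buffer holding part of the (still encoded) body
    console.log('Received body data: ' + chunk.toString());
  });
  req.on('end', function () {
    // the complete request, including its body, has been received
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end();
  });
  break;
```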

Multiple things are introduced here. First we check the HTTP headers to see whether the request is a POST request (req.method). If that’s not the case (say, because the user went to http://localhost:8080/formhandler without using the submit button), a simple 405 Method Not Allowed error page is generated.

If the headers indicate that it’s really a POST request we start listening for two events of the request object:

  • data: Is triggered whenever body data comes in at the TCP socket level. Note that this doesn’t necessarily contain all the data, hence it’s called a chunk.
  • end: Is triggered when the HTTP request is completed. This indicates that all body data associated with it was read and the appropriate data events have been triggered.

The above code outputs every chunk of data it receives to the console (after converting it from a Buffer to a String) and sends an empty 200 OK response to the client when the request has been completely received.

Buffer and decode

So we learned that the body data of our HTTP request is encoded in some way, which means that we’ll have to decode it, and that it may come in chunks of unknown size. The latter means that we will have to buffer each chunk of data until we have everything we need to decode it.

For the sake of simplicity we will do this by converting every chunk that comes in at a data event to a string variable and buffer that until the request is completed:
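
Once more a sketch rather than the original gist; note the require() calls for querystring and util at the top of the file and the fullBody variable shared by the two event handlers:

```javascript
// Sketch: buffer the chunks, then decode the complete body.
var querystring = require('querystring'); // decodes URL encoded form data
var util = require('util');               // only used to inspect() the result

// ... inside the request handler, the '/formhandler' POST branch:
var fullBody = '';                        // shared by both event handlers
req.on('data', function (chunk) {
  fullBody += chunk.toString();           // buffer every chunk as it arrives
});
req.on('end', function () {
  var decodedBody = querystring.parse(fullBody);
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(util.inspect(decodedBody));     // e.g. { username: '...', userage: '...' }
});
```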

Note that the variable fullBody which contains all of the data is declared and initialized outside of the event handlers for data and end. This is essential because both need to have access to it.

The code also requires the querystring module, which is needed to decode the body data, and the util module, which is only used to inspect() the decoded object; without those require() calls at the top, running it will throw an error. Running the example will present the user with a JSON-like structure containing the username and userage keys and the data that has been entered into the form.

Okay that’s just weird, I want my $_POST

Granted, this blog post is intended to be really introductory and extensive for those who are just getting started with node.js.

If that’s inconvenient you should go with one of the previously mentioned frameworks that will do most of the work for you (e.g. the bodyDecoder middleware for connect).

What if I want file uploads & stuff?

If you have more complex or larger data structures in your HTTP body, you might want to check the HTTP headers to see which encoding type is sent, or process single key/value combinations as they get streamed in. If that’s the case and you are looking for a more convenient alternative, you should check out the body decoder middleware or module of one of the web server frameworks available for node.js, or a standalone module that parses it.

For more details…

…check out the API docs at nodejs.org and join #Node.js on freenode or the mailing list if you get stuck. The full example code can be found in the gist.

How Twitter finally became useful…

…or: Things you might not have yet noticed in #NewTwitter.


My primary Twitter account was finally affected by Twitter’s rollout of their new interface – codenamed phoenix, based on some filenames – yesterday. It seems like the interface has already received some minor usability fixes since the rollout first started. The tech behind this new frontend has already been covered on their engineering blog.

Frontpage revamped

Besides the obvious design changes, the frontpage of Twitter suddenly becomes useful, at least for me. A quick overview of the five latest followers provides a way to quickly decide whether I want to follow them back or report that barely dressed girl for spam.

It also features some filtered views of your timeline that show retweets and mentions and provide quick access to saved searches. I really like that they integrated this set of features, especially because I was never a big fan of the column views that were offered by most desktop clients.

Pictures!

Your feed now contains little pictograms indicating the type of embedded content within some tweets. Pictures and other content from popular sites and media partners are now embedded directly in the view for single tweets – you’ll see these embeds when you click a tweet in your timeline or visit the URI for a single tweet.

Alas, you don’t get these embedded media when you embed single tweets with their tool Blackbird Pie – which looks a bit antique now – but I assume they’ll change that after finishing the rollout.

Spotlight on: A tweet!

Viewing singular tweets by visiting their permalink gives you the previously mentioned embeds. Click a link in your timeline and you’ll get even more useful meta-information on the tweet as well as the accounts and hashtags it contains.

You’ll see other tweets from the original author, which might give you some context on what he/she wants to say with that funny picture in the tweet. It might also be interesting what that conference hashtag is all about or who else was mentioned in the tweet so you can easily decide if you want to follow those people. All this meta-information adds a whole new dimension of discovery (of topics, people & places) to browsing your timeline.

@reply / mention – It’s in a box!

When you mention a user or reply to a tweet, it’s opened in a floating, resizable box which looks like the dialog boxes from jQuery UI. Those functions previously redirected the user to the starting page.

Why is this a great change? Because the rest of the interface stays usable. You can browse to another person’s profile, a search or any other view within the system to gather the information you need to put those nasty 140 characters together. All without changing browser windows or tabs.

@replies? conversations!

The killer feature I see in the new Twitter interface is that it finally turns @replies into useful and quickly comprehensible conversations. Clicking a tweet with the little chat-bubble symbol will now show the tweet in question, as well as the tweet it’s replying to.

When clicking an original piece of content, replies are now shown – even if you don’t follow the replying user. (Will we see a new form of reply-spam for influential users now?)

The earth keeps spinning

The old interface didn’t really emphasize location data within the Twitter platform. The new interface has it tightly integrated in singular tweet views and offers tools like “View Tweets at this place”. This makes their whole places database more useful for regular users – although it seems to lack mechanisms to “follow places” like you’d do with lists.

Anything I missed?

This article sure doesn’t cover all new / better integrated features of #NewTwitter – it’s what I find most useful at first glance. Did I miss anything really important? I’d love to hear your opinion. Let me know – here or directly on Twitter.

node-abbrev Snippet for user-friendly commandlines (like Text::Abbreviate for Perl)

I really like user-friendly command-line tools so I put together a small snippet for my node.js scripts. The result was put into a module, so it can be easily require()d and reused. I blame the lack of compliance with CommonJS naming standards on the quick-and-dirty nature of the script. ;-)

An example use case could be the start mode parameter of a script. A daemon that accepts the startup modes “help”, “start”, “stop” or “status” as the first and only parameter could look like this:

Example code (na-sample.js)
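
The gist is not reproduced in this archive, so the following is only a sketch of such a sample; the module path and the exact API are assumptions based on the usage description below:

```javascript
// na-sample.js -- sketch of a daemon that accepts an abbreviated start mode.
var Abbrev = require('./node-abbrev');

var modes = new Abbrev(['help', 'start', 'stop', 'status']);
var matches = modes.expand(process.argv[2] || '');

if (matches.length === 1) {
  console.log('Running in mode: ' + matches[0]);
} else if (matches.length > 1) {
  // e.g. "sta" matches both "start" and "status"
  console.log('Ambiguous mode, did you mean: ' + matches.join(', ') + '?');
} else {
  console.log('Unknown mode, expected one of: help, start, stop, status');
}
```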

Usage

After providing the module with a list of words that should be checked in the constructor, the expand() function can be called to get a list of words that match or start with the term that was provided.

Module code (node-abbrev.js)
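
Likewise, the module gist itself is missing from this archive; a minimal sketch with the constructor / expand() interface described above might look like this:

```javascript
// node-abbrev.js -- minimal sketch of the abbreviation helper (names assumed).
function Abbrev(words) {
  this.words = words || [];
}

// Return all known words that equal the given term or start with it.
Abbrev.prototype.expand = function (term) {
  return this.words.filter(function (word) {
    return word.indexOf(term) === 0;
  });
};

module.exports = Abbrev;
```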

The complete gist with example and module code can be found here.

UnFUG Lightning Talks (node.js, sshm rewrite, pygame)

Before presenting my B.Sc. thesis I decided to join the guys of UnFUG for a round of 5-minute lightning talks called “Pimp my x86”. I did a quick writeup on node.js, but as there was enough time, everybody went on to show off their recent projects and scripts. As my thesis is currently taking up most of my time, I could only share the following scripts: one that I use on a regular basis and one I wrote only to familiarize myself with the pygame library for Python.

node.js

Sadly the projector showed very little contrast, but I think I made my point about the great development node.js is currently going through. It’s a nice alternative approach to scalable networking systems for real-time applications.

For information on the library visit nodejs.org and take a look at the great example applications like this one.

sshm rewrite with bash-completion

A quick script I hacked together because the default sshm implementation could not handle additional parameters or ports other than the default, and it didn’t come with bash-completion. The code is far from optimal but it works. It mostly follows the usage conventions that the traditional sshm proposed, but the file format is incompatible.
The Python file goes somewhere in $PATH, while the shell script should be put in /etc/bash_completion.d/ (or similar on your system).

Controlling your Ubuntu / Linux Mint desktop with a joystick (via pygame)

The pygame library for Python offers modules for joystick support and I had some time (obviously) on my hands, so I hacked together a set of scripts to control my desktop machine with a gamepad / joystick. With 4 buttons and 2 axes I could give it some pretty neat functionality, even if I’m pretty sure I will never use it again. ;-)

The Python script starts a configurable process for each combination of button clicks and axis movements. For my tests I put together a shell script that uses wmctrl to navigate through my virtual desktops and added simulated keystrokes (with xte) to control my screen and irssi in a terminal.

Spice up Google Wave

Having done some collaboration via Google Wave lately, I’d like to share a few useful plugins and bots. This list is nowhere near complete, but it’s the stuff I found most useful.

Polls

When discussing parts of a text, graphics or other content elements, many decisions can be accelerated by holding a quick poll. The default poll extension offers a choice between Yes/No/Maybe. There are other polls with customizable choices for those who seek something more advanced. I installed Yes/No/Maybe/+ because it mimics the design of the default extension.

Deadlines

The deadline extension shows a countdown to a specified date and time inside your wave. I have previously used this to track and limit my work inside a wave but it could come in handy for group edits, too.

Why not insert it into your wave the next time you use one for brainstorming? It can avoid endless discussions and help to focus on the task ahead.

Like / Dislike

When a poll simply isn’t enough or there are more complex decisions to make, the Like / Dislike extension can be quite useful. The wave in the image below uses two of them to decide between two versions of a paragraph.

Syntax Highlighting

Being a developer, I was searching for a way to add syntax highlighting for discussing code on Google Wave. Luckily the guys at Zen and the Art of Programming shared a bot that does just this.

For those who are not yet convinced, Gina Trapani and Adam Pash have compiled a nice (and more extensive) list in The Complete Guide To Google Wave.

Sometimes even the most productive developers need a time-out; I can really recommend the competitive game extensions for Sudoku and Chess. But they can get really addictive. ;-)

Semester projects IN / HFU

I’ve just returned from my B.Sc. presentation on applied text-mining techniques for stock market predictions. Each semester when the thesis presentations are held, lower semesters present their one-term projects. Each student at my faculty has to do two of those during his/her studies.

There were some pretty neat projects. Alas, I haven’t had a chance to talk to all of them; you can check out the full list (in German) over at the faculty website.

Distributed, fail-safe block devices

The guys researching distributed, fail-safe block devices did a great evaluation of current technologies and compiled a nice demonstration with three datastore nodes running virtual machine instances.

TeachRobot control

Some of the code for the “TeachRobot” robotic arm interface project is available over at 32leaves.net. I guess they’ll submit their work to Hack a Day soon. Meanwhile, check out Christian’s Logic Analyzer project if you’re lacking an oscilloscope.

SoapBubble Bot from 32leaves on Vimeo.

The goal of this semester’s project was to build a new interface for existing legacy hardware (actually 30-year-old legacy). To demonstrate what we did, we came up with that little demo: making soap bubbles with a robot.

Lecture Podcasts

I really like the idea of multimedia learning in a way that’s up to date with current technologies. The guys who developed a neat lecture podcasting system presented quite performant streaming and a neat, Silverlight-based web interface. A big plus was the video of my UnFUG lightning talk that I gave yesterday to get in shape for my thesis presentation. ;-)

Others

Those were only three picks; even the first-semester project developing a touch-based information system for the faculty showed decent progress this semester. Great work.
