Summary
Protractor, by Yang Li, introduces a data-driven gesture recognizer that is more accurate and faster than its counterparts. The recognizer first resamples the data points into equidistant points using the procedure from the $1 recognizer. However, unlike $1, the stroke is not rescaled into a square; this preserves the aspect ratio and allows the recognizer to discriminate between long, narrow strokes and other shapes. Next, the stroke is translated so that its centroid lies at the origin. To discriminate between different orientations while reducing noise, the gesture is rotated so that the angle of its initial point is snapped to the nearest of four base angles. It is also possible to reset every gesture to a zero angle, which makes recognition rotation invariant. To perform classification, a distance measure between the input gesture and a template gesture is defined as the angle between their respective data vectors in n-dimensional space. The author then shows that an optimal angle can be found in closed form which, when added to the template's rotation angle, minimizes the distance (in the sense above) between the input and the template. Finding this optimal angle compensates for the noise incurred by estimating the gesture's orientation from its initial point alone. To classify the input, a nearest-neighbor search over the templates is used.
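The pipeline described above can be sketched roughly as follows. This is a minimal reading of the paper, not the author's code: the helper names (`resample`, `vectorize`, `optimal_cosine_distance`, `classify`), the resampling count of 16 points, and the number of snap orientations are my own assumptions, exposed here as parameters.

```python
import math

def resample(points, n=16):
    """Resample a stroke into n equidistantly spaced points (as in $1).
    The count n=16 is an assumed default, not taken from the paper text above."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = total / (n - 1)
    pts = [tuple(p) for p in points]
    out = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg  # interpolate a point on this segment
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:  # guard against floating-point round-off
        out.append(pts[-1])
    return out[:n]

def vectorize(points, orientation_sensitive=True, num_orientations=4):
    """Translate the centroid to the origin, rotate by the (snapped)
    indicative angle, and return a unit-length 2n-dimensional vector."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    centered = [(x - cx, y - cy) for x, y in points]
    indicative = math.atan2(centered[0][1], centered[0][0])
    if orientation_sensitive:
        step = 2 * math.pi / num_orientations
        delta = step * round(indicative / step) - indicative  # snap to nearest base angle
    else:
        delta = -indicative  # reset to zero angle: rotation invariant
    cos_d, sin_d = math.cos(delta), math.sin(delta)
    vec = []
    for x, y in centered:
        vec.append(x * cos_d - y * sin_d)
        vec.append(x * sin_d + y * cos_d)
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

def optimal_cosine_distance(v1, v2):
    """Angle between the two vectors after rotating one by the closed-form
    optimal angle atan2(b, a)."""
    a = sum(v1[i] * v2[i] + v1[i + 1] * v2[i + 1] for i in range(0, len(v1), 2))
    b = sum(v1[i] * v2[i + 1] - v1[i + 1] * v2[i] for i in range(0, len(v1), 2))
    angle = math.atan2(b, a)
    similarity = a * math.cos(angle) + b * math.sin(angle)
    return math.acos(max(-1.0, min(1.0, similarity)))

def classify(points, templates):
    """Nearest-neighbor search over (name, template_vector) pairs."""
    v = vectorize(resample(points))
    best, best_d = None, math.inf
    for name, tmpl in templates:
        d = optimal_cosine_distance(v, tmpl)
        if d < best_d:
            best, best_d = name, d
    return best, best_d
```

Note that the closed-form optimal angle avoids the iterative golden-section search that $1 performs per template, which is where the computational savings come from.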
Finally, the author shows that his method is comparable, and in some cases superior, to the $1 method in accuracy, while significantly outperforming it in terms of the computational resources required.
Discussion
This is a relatively recent article. Among the advantages of this method is that no features need to be defined, which makes it more flexible, since it is difficult to design features that can discriminate gestures in all domains. The author also shows that the method copes quite well with large gesture sets. I am not able to identify an obvious flaw right now.