I've experimented with the same idea and managed to get a few working examples.
My thinking was...
CPU version:
1) prepare a set of patterns:
- each pattern has the same size (e.g. 200x200 px)
- a pattern is a collection of circles
2) while the player draws, you sample their strokes and scale them to the same size as the pattern images
3) compare the sampled points with the predefined pattern -> the further a sample is from the center of a pattern point, the less accurate it is (you can set, for example, a 5 px radius)
4) ... do a lot of optimizations and thinking :)
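The CPU matching step could be sketched roughly like this. This is a minimal illustration, not my actual code: the point spacing, the 5 px radius, and the linear falloff are all assumptions you'd tune.

```python
import math

def score_stroke(stroke_points, pattern_circles, radius=5.0):
    """Score a resampled stroke against a pattern.

    stroke_points:   list of (x, y) samples, already scaled to the pattern size
    pattern_circles: list of (x, y) circle centers defining the pattern
    radius:          tolerance; a sample further than this from every circle scores 0
    """
    if not stroke_points:
        return 0.0
    total = 0.0
    for sx, sy in stroke_points:
        # distance from this sample to the nearest pattern circle
        d = min(math.hypot(sx - cx, sy - cy) for cx, cy in pattern_circles)
        # linear falloff: dead-center = 1.0, at the radius edge = 0.0
        total += max(0.0, 1.0 - d / radius)
    return total / len(stroke_points)  # 0..1, higher = closer match

# Example: a vertical-line pattern vs. a slightly wobbly player stroke
pattern = [(100, y) for y in range(0, 201, 10)]
stroke = [(101, 10), (99, 50), (102, 100), (100, 150), (98, 190)]
print(round(score_stroke(stroke, pattern), 2))  # -> 0.76
```

Averaging per-sample scores keeps the result independent of how many samples the stroke produced, which matters because fast and slow strokes are sampled at different densities.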
GPU version:
1) the prepared patterns are images with thick lines that fade from the center to the sides
- stronger color = more accurate to the pattern
2) the player draws to a texture as well
3) render the player's texture with a fragment shader, using the pattern as a mask
- for example, if a pixel is on the mask, save its color to the X channel (higher value = closer to the perfect shape)
- if it is not on the mask, use the Y channel
4) check the resulting image -> sum up the X and Y channels to see how accurate the drawing was
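The per-pixel logic of that fragment shader pass can be simulated on the CPU with NumPy to make it concrete. Everything here is a toy assumption: an 8x8 texture, a vertical-line mask fading 1.0 -> 0.5, and a simple hits/(hits+misses) accuracy at the end.

```python
import numpy as np

# Hypothetical 8x8 textures: the mask fades from the line center outwards,
# the player texture is 1.0 wherever the player drew.
H = W = 8
mask = np.zeros((H, W), dtype=np.float32)
mask[:, 3] = 1.0    # line center: full strength
mask[:, 2] = 0.5    # fading sides
mask[:, 4] = 0.5

player = np.zeros((H, W), dtype=np.float32)
player[:, 3] = 1.0  # player drew exactly on the center column
player[:, 6] = 1.0  # ... plus one stray column off the pattern

# Per-pixel "shader": drawing on the mask goes to the X channel (weighted by
# the fade, so stronger color = more accurate), drawing off it to Y.
drawn = player > 0.0
x_channel = np.where(drawn & (mask > 0.0), mask, 0.0)    # reward
y_channel = np.where(drawn & (mask == 0.0), player, 0.0) # penalty

hits = float(x_channel.sum())    # step 4: sum up the X channel
misses = float(y_channel.sum())  # ... and the Y channel
accuracy = hits / (hits + misses)
print(hits, misses, accuracy)
```

On the GPU you would do the channel split in the fragment shader and the final sums either by reading the result back or by repeatedly downsampling the texture; the arithmetic is the same as above.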