Deep Sketch

Deep Sketch uses pix2pix to convert portrait photos into sketches. It runs entirely in the browser on tensorflow.js, using the camera to capture a portrait and convert it into a sketch. The output drawings are far from perfect, though.
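pix2pix generators conventionally take pixel values normalized to [-1, 1] and produce outputs in the same range, which must be mapped back to bytes for display. Below is a minimal sketch of that pre/post-processing in plain JavaScript; the function names and the exact [-1, 1] convention are assumptions here, not taken from the demo's actual code.

```javascript
// Hypothetical helpers for a pix2pix-style browser demo.
// Assumption: the model expects RGB values scaled from [0, 255] to [-1, 1]
// and produces outputs in [-1, 1] that must be mapped back for display.

// Convert canvas RGBA pixel data (as from getImageData) into a flat
// Float32Array of RGB values in [-1, 1], dropping the alpha channel.
function rgbaToModelInput(rgba) {
  const out = new Float32Array((rgba.length / 4) * 3);
  let j = 0;
  for (let i = 0; i < rgba.length; i += 4) {
    out[j++] = (rgba[i] / 255) * 2 - 1;     // R
    out[j++] = (rgba[i + 1] / 255) * 2 - 1; // G
    out[j++] = (rgba[i + 2] / 255) * 2 - 1; // B
  }
  return out;
}

// Map one model output value in [-1, 1] back to a displayable byte,
// clamping anything that falls slightly outside the range.
function modelOutputToByte(v) {
  return Math.round(Math.min(255, Math.max(0, ((v + 1) / 2) * 255)));
}
```

In a real tensorflow.js demo this scaling would typically be done with tensor ops on the GPU rather than in a JavaScript loop, but the arithmetic is the same.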


Once you allow access to the webcam, you can click the "Sketch Picture" button and your computer should start to do some thinking. It usually takes about 10 seconds for a sketch to appear. The results are not great. Part of the problem is that the training set is small and not very diverse. It comes from the University of Hong Kong and can be found here.
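pix2pix models are usually trained on a fixed input size (commonly 256×256), so a webcam frame has to be resized before it goes into the generator. A browser demo would normally just draw the video frame onto a 256×256 canvas, or call a tensorflow.js resize op, but the idea can be sketched in plain JavaScript; the function below is a hypothetical illustration, not the demo's actual code.

```javascript
// Nearest-neighbor resize of single-channel pixel data to a square target
// size, e.g. the 256x256 input a pix2pix generator typically expects.
// (A real browser demo would more likely draw the <video> frame onto a
// fixed-size canvas and read the pixels back; this just shows the mapping.)
function resizeNearest(src, srcW, srcH, dstSize) {
  const dst = new Uint8ClampedArray(dstSize * dstSize);
  for (let y = 0; y < dstSize; y++) {
    // Pick the nearest source row for this destination row.
    const sy = Math.min(srcH - 1, Math.floor((y * srcH) / dstSize));
    for (let x = 0; x < dstSize; x++) {
      // Pick the nearest source column for this destination column.
      const sx = Math.min(srcW - 1, Math.floor((x * srcW) / dstSize));
      dst[y * dstSize + x] = src[sy * srcW + sx];
    }
  }
  return dst;
}
```

The roughly ten-second wait is plausible for running a full encoder-decoder generator on the CPU or an unwarmed WebGL backend; the resize itself is negligible by comparison.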

The drawings tend to come out vaguely Asian-looking. Uneven training sets leading to uneven results is of course a well-known problem in machine learning, but being a white male, it usually still works relatively okay for me, which is somewhat interesting.

The code used here is mostly based on https://github.com/yining1023/pix2pix_tensorflowjs.