Description: Use a "voice print" to distinguish individual users. This is useful when multiple individuals share the same Mycroft system, e.g. a family. Later it can be used to select the correct profile for personalized Skills such as a calendar or music playlists.
Discussion: The approach is to have each user record several voice samples of the Wake Word. A model built with TensorFlow and TFLearn then classifies an incoming utterance, returning either the user believed to match or None (a legitimate result when a guest is speaking).
Current State: A proof of concept has been built as a Skill around TensorFlow and TFLearn, but it is not yet in a working state. The current code needs to be altered to train a model on saved Wake Word data, then use that trained model to determine whether Mycroft recognizes the current speaker. Recognition currently works, but only with the sample data provided by the TensorFlow codebase being used. Eventually this will probably be best implemented within core.
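The TFLearn model itself is not shown here, but the intended flow (enroll samples per user, train a "voice print", then classify a new utterance or return None for a guest) can be illustrated with a deliberately simplified numpy sketch. It substitutes nearest-centroid matching on crude spectral-band features for the real neural model; all function names, the feature scheme, and the threshold value are illustrative assumptions, not the actual Skill code.

```python
import numpy as np

def extract_features(samples, n_bands=8):
    # Crude stand-in features: mean magnitude in a few spectral bands,
    # unit-normalized. A real model would use MFCC-style features.
    spectrum = np.abs(np.fft.rfft(samples))
    energies = np.array([band.mean() for band in np.array_split(spectrum, n_bands)])
    return energies / (np.linalg.norm(energies) + 1e-9)

def train_voiceprints(recordings_by_user):
    # Average each enrolled user's feature vectors into one "voice print".
    return {user: np.mean([extract_features(s) for s in samples], axis=0)
            for user, samples in recordings_by_user.items()}

def identify_speaker(samples, voiceprints, threshold=0.15):
    # Return the closest-matching user, or None if no print is close
    # enough -- the legitimate "guest speaker" result.
    feats = extract_features(samples)
    best_user, best_dist = None, float("inf")
    for user, print_vec in voiceprints.items():
        dist = np.linalg.norm(feats - print_vec)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= threshold else None
```

Usage with two synthetic "voices" (pure tones plus noise standing in for recorded Wake Words): enroll a few samples per user via `train_voiceprints`, then call `identify_speaker` on a new utterance; an unenrolled voice falls outside the threshold and yields None.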
Challenges: Raspberry Pi hardware has become quite powerful, but running TensorFlow alongside an already-running Mycroft may be pushing its limits. Once a working proof of concept is complete, testing can hopefully optimize the training process enough to run on a Pi.
Feature Issue: started
Team: TREE-edu