Equip any toothbrush with Personalized Feedback from Google Assistant

Quick Update
  • In July 2018, out of 129 projects and 1125 participants, Google selected us to be among the top 10 semifinalist teams to represent the United States at the 2018 China-US Young Maker Final Competition in Beijing, China.

Featured in Chinese Newspaper: https://mp.weixin.qq.com/s/sK4kmk_6oLuSPOyRZiviMw

We received an Honorable Mention! Pictured with China’s Minister of Education

An IoT device that can be attached to any toothbrush to track how well you’re brushing and augment your brushing routine through reinforcement. It interfaces with the Google Assistant to provide personalized feedback, helping you meet your oral care goals! Various sensors track the angle, speed, and technique of brushing to verify that each region of the mouth receives the proper treatment, ensuring the proper time and technique are spent on each region.

Our device interacts with the user through the Google Assistant API. Once activated, the Google Assistant voice prompts the user to begin brushing a certain region. Feedback is given periodically to make sure the user maintains the proper speed and angle. After one region has been satisfactorily cleaned, the voice prompts which region to address next.

To encourage a consistent brushing regimen throughout a family, the device keeps track of the quality of each brushing session. Consistent and masterful brushing is rewarded through a point system tracked on a per-family basis. Instead of a mother needing to enforce a daily brushing regimen among her children, the siblings will compete amongst themselves for the highest overall score. While the mother can reward the siblings for high scores and improvement, the points can also be redeemed for dental accessories: amassing enough points can greatly reduce the cost of new toothbrushes, toothpaste, and floss. Proper dental care reported to dental services also has the potential to reduce dental insurance costs, since it lowers the costs incurred at twice-yearly checkups.

The prototype consists of a Particle Photon, which transmits sensor data wirelessly to the base (with the Google Assistant built in). The Raspberry Pi in the base charges the Photon and classifies the sensor data into specific regions of the mouth, along with accuracy and speed readings. Progress is analyzed by the base, which communicates with the Google Assistant API to provide feedback.
We aim to continue improving our model accuracy and to add a user-friendly front end (e.g., a mobile app or website) for users (e.g., family members) to track their progress! An Android or iOS app could visualize the regions of the mouth being prompted by the base, monitor progress, provide statistics, and redeem points from the point system. If taken to fruition, this would be a valuable addition to the daily brushing routine in any household.
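As a rough sketch of how the per-family point system described above might be tracked on the base, here is a minimal Python example. The class name, scoring formula, and member names are all hypothetical illustrations, not part of the actual prototype:

```python
from collections import defaultdict


class FamilyScoreboard:
    """Hypothetical per-family point tracker for brushing sessions."""

    def __init__(self):
        self.points = defaultdict(int)  # member name -> accumulated points

    def record_session(self, member: str, regions_cleaned: int, avg_accuracy: float) -> int:
        # Reward both coverage (regions brushed) and technique
        # (classifier accuracy); the weights here are arbitrary.
        earned = regions_cleaned * 10 + int(avg_accuracy * 100)
        self.points[member] += earned
        return earned

    def leaderboard(self):
        # Highest scorer first, for sibling competition.
        return sorted(self.points.items(), key=lambda kv: -kv[1])


board = FamilyScoreboard()
board.record_session("alice", regions_cleaned=6, avg_accuracy=0.9)   # 60 + 90 = 150
board.record_session("bob", regions_cleaned=4, avg_accuracy=0.95)    # 40 + 95 = 135
```

A real version would persist scores on the base and expose them to the planned mobile app for point redemption.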

We built the Raspberry Pi version of the Google Assistant from scratch.

How we built it

Hardware components

Particle Photon
SparkFun Photon IMU Shield
Raspberry Pi 3 Model B
Speaker and Microphone that work with Raspberry Pi 
Acrylic sheet (for making the case)
Clean toothbrushes!


Software components

Google AIY Voice Kit
Google Assistant SDK
The GRT library with various machine learning tools: https://github.com/nickgillian/grt
Open Sound Control (OSC) protocol to transmit data among devices across the network
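For context on the OSC protocol listed above: an OSC message is just an address pattern, a type-tag string, and big-endian arguments, each null-terminated and padded to a 4-byte boundary. The sketch below encodes a message with float32 arguments (e.g., IMU readings) in pure Python; it illustrates the wire format per the OSC 1.0 spec, not the exact code our prototype uses, and the `/imu` address is made up:

```python
import struct


def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    return s + b"\x00" * (4 - len(s) % 4)


def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    tags = "," + "f" * len(floats)  # e.g. ",fff" for three floats
    data = osc_pad(address.encode()) + osc_pad(tags.encode())
    for x in floats:
        data += struct.pack(">f", x)  # big-endian float32
    return data


# Example: an accelerometer reading; send the bytes over UDP with
# socket.socket(AF_INET, SOCK_DGRAM).sendto(packet, (host, port))
packet = osc_message("/imu", 0.1, -9.8, 0.02)
```

OSC over UDP keeps per-reading overhead tiny, which suits streaming IMU samples from the Photon to the base.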


Dynamic Time Warping (DTW)

From Nick Gillian: “DTW is a powerful classifier that works well for recognizing temporal gestures. Temporal gestures can be defined as a cohesive sequence of movements that occur over a variable time period. The DTW algorithm is a supervised learning algorithm that can be used to classify any type of N-dimensional, temporal signal. The DTW algorithm works by creating a template time series for each gesture that needs to be recognized, and then warping the realtime signals to each of the templates to find the best match. The DTW algorithm also computes rejection thresholds that enable the algorithm to automatically reject sensor values that are not the K gestures the algorithm has been trained to recognize (without being explicitly told during the prediction phase if a gesture is, or is not, being performed).” If you’re curious to learn more, check out his research paper: http://www.nickgillian.com/papers/Gillian_NDDTW.pdf
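To make the idea concrete, here is a minimal DTW distance in plain Python. GRT’s implementation adds template training, multi-dimensional signals, and the rejection thresholds described above; this toy 1-D version and its gesture data are illustrative only:

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Step from a match, an insertion, or a deletion — the "warp".
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]


# Toy templates standing in for per-region brushing gestures (made-up data).
templates = {
    "upper_left": [0.0, 1.0, 2.0, 1.0, 0.0],
    "lower_right": [0.0, -1.0, -2.0, -1.0, 0.0],
}
# A query that is a slightly warped, slightly longer "upper_left" gesture.
query = [0.0, 0.9, 2.1, 1.1, 0.0, 0.1]
best = min(templates, key=lambda k: dtw_distance(query, templates[k]))
```

The warping is what lets a slow, careful brush stroke match the same template as a quick one, which is why DTW suits variable-duration gestures.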



  • After plugging in the USB speaker and microphone, you can check the card and device numbers using arecord -l (lists capture devices, i.e., the microphone) and aplay -l (lists playback devices, i.e., the speaker)
arecord -l
aplay -l

Create a new file named .asoundrc in the home directory (/home/pi), filling in the card and device numbers reported by arecord -l (microphone) and aplay -l (speaker):

pcm.!default {
  type asym
  capture.pcm "mic"
  playback.pcm "speaker"
}
pcm.mic {
  type plug
  slave {
    pcm "hw:<card number>,<device number>"
  }
}
pcm.speaker {
  type plug
  slave {
    pcm "hw:<card number>,<device number>"
  }
}
  • This step took us a couple of hours of troubleshooting drivers and hardware. If your speaker or microphone still doesn’t work, “forcefully” set them as the defaults by editing:
sudo vim /usr/share/alsa/alsa.conf

#defaults.ctl.card 0
defaults.ctl.card 1
#defaults.pcm.card 0
defaults.pcm.card 1
  • To test your speaker
speaker-test -t wav
  • To test recording audio with the Raspberry Pi and microphone
arecord --format=S16_LE --duration=5 --rate=16000 --file-type=raw out.raw
  • To play the recording back (using the same sample format)
aplay --format=S16_LE --rate=16000 out.raw




Raspberry Pi base with the integrated Google Assistant to process data using ML.

Device can be easily attached to most toothbrushes!

Github: https://github.com/megatran/personalized_toothbrush_assistant