Building Voice-First Android Apps

EN

Thanks to the latest advances in Machine Learning, we can now interact with machines through natural language. The age of voice assistants is here, with Alexa, the Google Assistant and others. But, as an Android developer, what can I do to bring conversational features to my existing app?

When we think about building voice-forward features, we usually think of existing voice assistants such as Alexa and the Google Assistant. But what about the fully capable computers we carry with us all the time, our smartphones? Many moments in our day-to-day lives are well suited to voice interaction: while driving or cooking, for example. Let's not forget that voice interactions are also extremely accessible, not only physically (for people with dexterity or motor impairments) but also cognitively (most of us have a loved one who struggles with technology, and in some emerging countries many people have very limited access to computers and are not at ease with them).

In this talk, I'll explain which integrations are possible on Android:

  • 1st-party solutions such as the SpeechRecognizer and TextToSpeech APIs (see the sketch after this list)
  • Other Google solutions such as ML Kit, TensorFlow and Dialogflow
  • 3rd-party solutions such as Porcupine, Snips, Amazon Lex, Snowboy and PocketSphinx
  • Integration with the Google Assistant via App Actions
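
As a taste of the 1st-party APIs, here is a minimal Kotlin sketch that listens for a phrase with SpeechRecognizer and echoes it back with TextToSpeech. It is illustrative rather than production-ready: the activity name and utterance ID are made up, and it assumes the RECORD_AUDIO runtime permission has already been granted.

    import android.content.Intent
    import android.os.Bundle
    import android.speech.RecognitionListener
    import android.speech.RecognizerIntent
    import android.speech.SpeechRecognizer
    import android.speech.tts.TextToSpeech
    import androidx.appcompat.app.AppCompatActivity
    import java.util.Locale

    class VoiceEchoActivity : AppCompatActivity() {

        private lateinit var recognizer: SpeechRecognizer
        private lateinit var tts: TextToSpeech

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)

            // Text-to-speech: configure the language once the engine is ready.
            tts = TextToSpeech(this) { status ->
                if (status == TextToSpeech.SUCCESS) tts.setLanguage(Locale.US)
            }

            // Speech-to-text: assumes RECORD_AUDIO has already been granted.
            recognizer = SpeechRecognizer.createSpeechRecognizer(this)
            recognizer.setRecognitionListener(object : RecognitionListener {
                override fun onResults(results: Bundle) {
                    // The best hypothesis is the first entry in the results list.
                    val heard = results
                        .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                        ?.firstOrNull() ?: return
                    tts.speak("You said: $heard", TextToSpeech.QUEUE_FLUSH, null, "echo")
                }
                // Remaining callbacks left empty for brevity.
                override fun onReadyForSpeech(params: Bundle?) {}
                override fun onBeginningOfSpeech() {}
                override fun onRmsChanged(rmsdB: Float) {}
                override fun onBufferReceived(buffer: ByteArray?) {}
                override fun onEndOfSpeech() {}
                override fun onError(error: Int) {}
                override fun onPartialResults(partialResults: Bundle?) {}
                override fun onEvent(eventType: Int, params: Bundle?) {}
            })

            recognizer.startListening(
                Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).putExtra(
                    RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
                )
            )
        }

        override fun onDestroy() {
            recognizer.destroy()
            tts.shutdown()
            super.onDestroy()
        }
    }

Both APIs ship with the platform, so no extra dependency is needed; for wake words or natural-language understanding, you would reach for the other solutions listed above.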

SFEIR

Elaine Dias Batista

Paris, France

Elaine works at SFEIR on mobile and voice technology projects. She loves sharing her experience with the community on her favorite topics: Android, the Google Assistant and Flutter.

Twitter: @elainedbatista