
Voca

Voca lets people on the go send text messages by voice, eliminating the need to type.

The Problem

Texting by hand is often tedious and mistake-prone, especially when you're in a hurry. Why can't texting be as easy as talking? We wanted to create a product that made composing and editing messages by voice intuitive.

Mobile App | View on Play Store


Phase 1

Initial phase to develop a minimum viable product that accurately sent messages by voice.

Phase 2

In phase 2 we implemented user feedback from phase 1 to refine the product. We also introduced features for a new kind of user: the driver.


The Users - Phase 1

We decided the target users for Phase 1 were busy people who do not want to spend time typing out messages when they could dictate them faster. This scenario fit a lot of people, but we focused on the busy mom who could message her friends while taking care of a child, and the college student who could answer his girlfriend's text while biking to class.

My Role

I have been the Product Designer throughout the project (from idea conception to the present), working directly with the Product Manager. I began the UX process by exploring who the target users would be. After defining the users, I created user flows before moving into interface design. For the UI, I started with basic sketches of the screens and turned them into interactive mockups. I developed the best sketches further in the design tool Sketch, and finally implemented my designs in Android XML.

The project was constrained by time and resources. We had only a three-month window to create a minimum viable product for Phase 1 so that we could begin collecting user feedback for Phase 2, which limited the time I could spend researching users.


User Flow - Phase 1

The most important actions were choosing a contact, composing a message, and sending it. Although these were the critical actions, it was also important not to lose sight of the "little" steps in between.


Design Process - Phase 1

Since the three main steps were 1) selecting a contact, 2) composing the message, and 3) sending the message, we focused first on developing the contact screen and the dictation screen. I started by creating initial sketches of these two screens, then chose the most promising ones to build into simple prototypes (see example here) to explore the interaction between the two screens.


Challenge: Contact Screen

The contact screen was the simpler of the two, so we started there. Even so, it turned out to be more complex than we originally anticipated. For instance, how would the avatar look with and without a picture? How would new messages appear on the contact screen? What information does the user need on this screen?

Initial contact screen sketches

Finished contact screen with the newest messages coming in on top. (Right) We experimented with having new messages come in at the bottom; I initially thought it would make new messages easier to reach with one hand, but it confused users, so we reverted to the standard order with the most recent on top.



Challenge: Dictation Screen

Developing the dictation screen took a lot more work than the contact screen, as there were a number of things to consider. For example, how does the user turn on and off the microphone? How does the user know when the microphone is on or off? How does the user intuitively edit their message? Finally, how does the user send the message?

In the early draft of the dictation screen, users did not understand that the animation behind the texting area meant the microphone was on and listening. There was another problem as well: what if the user was in a quiet spot and could not dictate their message aloud? Or what if they were around people they did not want hearing their messages?

Early sketches of the dictation screen

A few early animation sketches

Early dictation screen draft


To solve both problems, we connected the animation more directly to the microphone, with elements showing in front of and behind the texting area. We also added the ability to type a message and changed the prompt from "say message" to "say message or tap here". Tapping brings up the keyboard and turns off the microphone.
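The tap-to-type behavior described above amounts to a small piece of state logic. The sketch below is a hypothetical illustration of that idea; the class and method names are my own, not Voca's actual code.

```java
// Hypothetical sketch: dictation is the default input mode, and tapping
// the texting area switches to keyboard input, which also mutes the mic.
public class ComposerState {
    public enum InputMode { DICTATION, KEYBOARD }

    private InputMode mode = InputMode.DICTATION;

    // True while the app should show the "listening" animation.
    public boolean isMicrophoneOn() {
        return mode == InputMode.DICTATION;
    }

    // Called when the user taps the texting area ("say message or tap here").
    public void onTextAreaTapped() {
        mode = InputMode.KEYBOARD; // keyboard comes up, mic turns off
    }

    // Called when the user taps the mic button to resume dictation.
    public void onMicTapped() {
        mode = InputMode.DICTATION;
    }
}
```

Tying the listening animation to the same state that drives the keyboard keeps the two from ever disagreeing on screen.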

As we continued to iterate on the design, we realized we next needed a way to see the message thread. This affected the dictation screen because we did not want the user to take an extra step to view the thread (i.e. go from a) contact screen, to b) message thread, to c) dictation screen), so we decided to integrate the message thread into the dictation screen.


Challenge: Previous Messages

My initial design had the message thread in a drawer to the right, so that it was easily available by swiping left from the dictation screen, but this design was not direct enough. The last message needed to be right there on the page without requiring the user to do an extra swipe to view the message thread.

Early sketch of deciding how to implement previous messages with the dictation screen.

Message thread available by swiping left


To give the message thread greater visibility, we moved it to the top of the screen, where the last few words of the conversation are always in sight and the full conversation is available by swiping down.


As we continued to develop the app, we realized that one of the key pieces of functionality was making editing texts by voice more intuitive. Intuitive commands included "Keep the last part" and "Delete the second part". Since people do not naturally "spit out" their entire texts in one breath, we had to find a way to visually distinguish each separate utterance for the editing process to work properly. To show users how their phrases were understood, I selected a set of colors that highlight the text without making it unreadable.
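The editing model above depends on keeping each utterance as its own addressable "part". Here is a minimal hypothetical sketch of that idea; the names and commands are illustrative assumptions, not Voca's implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each spoken phrase becomes a separate part, so
// commands like "keep the last part" or "delete the second part" can
// operate on it, and the UI can highlight each part in its own color.
public class UtteranceEditor {
    private final List<String> parts = new ArrayList<>();

    // Each finished phrase from the speech recognizer is its own part.
    public void addUtterance(String text) {
        parts.add(text);
    }

    // "Delete the second part" -> deletePart(2)
    public void deletePart(int oneBasedIndex) {
        parts.remove(oneBasedIndex - 1);
    }

    // "Keep the last part" -> drop everything except the final phrase.
    public void keepLastPart() {
        if (parts.size() > 1) {
            String last = parts.get(parts.size() - 1);
            parts.clear();
            parts.add(last);
        }
    }

    // The full message as it would be sent.
    public String message() {
        return String.join(" ", parts);
    }
}
```

Because each part keeps its index, the spoken commands map directly onto list operations instead of fragile character offsets.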


The Users - Phase 2

The target users for Phase 2 were people who want to answer messages while on the go, literally. We focused on truck drivers, who spend long hours on the road, as well as busy professionals who have a long daily commute and want to put that time to better use.

User Flow - Phase 2

The main challenge going from Phase 1 to Phase 2 was the user flow. Answering texts while driving called for a much simpler flow, one that did not require the user to squint at tiny text to read incoming messages or tap five buttons before replying. To solve this, we created another screen that plays incoming messages out loud, with a large button for responding easily.


Design Process - Phase 2

The play-message screen is simple on purpose. It will most likely never be used except while driving, so it has a minimal number of options and extra-large buttons that are easy to tap without requiring too much attention.

Initial sketch of new screen that would read incoming messages.


Current draft of incoming messages screen. Buttons are large to be easy to hit while driving.

I experimented with other ways to let the message be played, but both the dictation screen and the message thread already had too much information. We also did not want to complicate the experience for users who use the app only for its basic feature: texting by voice.


Other Changes: Motion Sensors

To make the contacts screen and dictation screen more appropriate for driving, they had to respond when a person is driving. We developed Voca to detect driving motion and enlarge text, making it easier to read on the go.

Voca detects driving motion and automatically makes text much larger.
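One simple way to implement the detection described above is a speed-based heuristic: if recent readings (e.g. from GPS, in meters per second) stay above walking pace, switch to enlarged text. This sketch is a hypothetical illustration; the threshold, window, and scale factor are my own assumptions, not Voca's actual values.

```java
// Hypothetical sketch of a driving-mode heuristic: sustained speed above
// a walking pace turns on driving mode, which enlarges the text.
public class DrivingModeDetector {
    private static final double DRIVING_SPEED_MPS = 5.0; // ~18 km/h (assumed)
    private static final int WINDOW = 5;                 // samples to confirm

    private int consecutiveFastSamples = 0;
    private boolean driving = false;

    // Feed one speed sample; returns whether driving mode is now active.
    public boolean onSpeedSample(double metersPerSecond) {
        if (metersPerSecond >= DRIVING_SPEED_MPS) {
            consecutiveFastSamples++;
            if (consecutiveFastSamples >= WINDOW) {
                driving = true;  // sustained speed: enlarge text
            }
        } else {
            consecutiveFastSamples = 0;
            driving = false;     // stopped or walking: normal text size
        }
        return driving;
    }

    // Text scale the UI applies: much larger while driving.
    public float textScale() {
        return driving ? 1.8f : 1.0f;
    }
}
```

Requiring several consecutive fast samples before switching avoids flipping the layout on a single noisy GPS reading; a production version would also want to avoid dropping out of driving mode at every stoplight.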


Retrospective

Voca's user base is varied, and so is how they use the product's features. Some love it for driving, while others ask us to add more driving-oriented features, such as reading a message back before sending.

Looking back, one thing we could have improved was spending more time understanding the new target user at the beginning of Phase 2.

This app is great! I love that if you dont want to speak your texts at the moment you dont have to jump through firey rings with glass on the other side to switch to keyboard I made this my default messaging app mainly because I drive a truck and its safer to speak texts this app does what I need with Big buttons so I dont have to squint to send a message
— Tom Steketee, Play Store Review