Vizlator is a mobile app that lets users take a photo, or choose one from their gallery, to extract the text in the image, or even identify the pictured object and get the words that best describe it.
Research
Style guide
Wireframes
Design System
Branding & Logo
2 Weeks
July 2020
Figma
Sketch
Procreate
Flask API
Developers:
Hannah Joo
Sharon Cheung
UX Designer:
Jeongmin Sitzes
The rise of globalization and technology has made the world a smaller place. People now encounter foreign languages in everyday life in many ways, including digital entertainment, food, travel, and work with diverse people from all over the world. Even without being fluent in a foreign language, we can improve our experience of foreign cultures and languages by understanding a few words and a little context. However, a problem arises before you can even get a translation: how can we input foreign words into a search if we don't know how to pronounce or type them?
As a result, we came up with Vizlator: a photo-translation iOS app that lets users get translated words instantly by taking a photo of text or of an object the app can identify. With Vizlator, users don't need to struggle with typing or worry that a misspelled word will lead to an incorrect translation, because the app identifies the words for them automatically. This two-week capstone project focused on developing a machine learning app and was a collaboration with two developers from Ada Developers Academy.
In our multicultural and multilingual world, clear and efficient communication across languages and cultures matters every day. Sometimes, by understanding only a few words, you can reach more people than you realized and connect with them more effectively, or get the full experience of the values and beliefs a culture reflects.
Coded Demo Presented at Ada Developers Academy
Ideally, I would have done user interviews in person, but due to time constraints, this was beyond the scope of the project. Instead, I did secondary research: investigating users' needs and the challenges that cause translation difficulties by reading through articles and summarizing my findings in one place.
From the data I collected, I found that the translation process is harder and more complicated when people translate words between languages with different writing systems. For example, people whose native language is English usually struggle more to search for words written in non-Latin scripts (e.g., Chinese) than in other Latin-alphabet languages (e.g., French).
Whether on a smart device or through traditional means (written sources like a dictionary or phrasebook), it is hard for users to even start a search. Technical limits (the target language may not be installed in the keyboard settings) or unfamiliarity with the script mean users simply don't know how to type words in the target language.
Errors can also cascade from wrong text input: users might misspell foreign words while typing and, as a result, get a translation different from the one they are looking for.
A lack of external interaction can make the learning process tedious and boring, eventually eroding users' motivation to keep learning the language.
Either asking someone for help typing the language or figuring out how to type it themselves (e.g., downloading and installing language keyboards) takes significant effort and time, and can lead users to give up.
So what if you no longer needed to worry about how to type or pronounce foreign words, or even whether they are spelled correctly? By removing the barrier of typing a foreign language, we begin to see the power of Vizlator. Its visual translation feature lowers the barrier to foreign-word searching by letting your smart device capture the word for you in a few snaps and return an instant translation. All you need to do is point your camera at an object or at text written in a foreign language.
Before building anything, we identified what specific tasks or goals users would want, and then we prioritized user stories to further chisel down our target MVP.
01. I want to search foreign words without typing or asking others
02. I want to sign up or log in easily, without a complicated registration process
03. I want to learn how the translated sentences / words sound
04. I want to bookmark the words I found so that I can access them anytime
We decided that the most important features for the Minimum Viable Product were the following:
01. Simplified sign up / sign in
02. Auto-detecting photo translation
03. Selecting target languages
04. Bookmarking words
Our goal was to design a simple, fun and friendly learning experience that would empower app users to learn autonomously through a few snaps. After analyzing the user stories and MVP features, I decided to design the following screens:
01. User sign up/sign in
02. Text / object translation
03. Language setting
04. Language library
The wireframes show the entire task flow, starting from the onboarding process. After users sign up or sign in with one of the external accounts, they land on the home screen, which leads to two separate translation controls for translating words and images. We also explored two different language-setting features: an individual language settings page and language-change buttons on the translating page. When users finish the translation process, they can bookmark the words in their app library so that they can easily look them up anytime.
Supported languages include English, Spanish, Chinese, Korean, Japanese, German, French, and Vietnamese.
My approach to UI is to develop cheerful visual elements and color palettes that create a positive feeling by concentrating on simplicity, consistency, and reusability. The choice of colors was also meant to elicit friendly tones. I emphasized contrast as a way of thinking about accessibility, using blackish gray and white with a gradient-purple accent color. We also used Raleway as the primary font to give users a unique but friendly feeling while they use the app.
We wanted a product name that emphasized the app's function: translating the visual objects of images and text. So we came up with "Vizlator," a compound of "Visual" and "Translator." In the same spirit, for the logo I created a microphone, a typical symbol of translation, with an image icon inside.
Users can sign up / sign in easily using their email or external accounts. When using the app for the first time, users select a default language, which is set as the output language for photo translation and the input language for text translation.
Users can either select a photo from the gallery or take a new one to translate. The default language, set during sign-up or changed manually in the language settings, is used as the detected language for text translation and the target language for object translation. Users can also listen to the words by tapping the speaker button next to the input/output text box.
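As a rough illustration of this flow, the sketch below shows how the photo-translation pipeline behind the Flask API might be structured. Vizlator's actual backend is not shown here, so the function names and the tiny in-memory dictionary are hypothetical stand-ins for the real OCR and translation services.

```python
# Hypothetical sketch of a photo-translation pipeline: detect text in a
# photo, then translate it into the user's default language. The OCR and
# translation stand-ins below are assumptions for illustration only.

def detect_text(image_bytes: bytes) -> str:
    """Stand-in for an OCR step that extracts text from a photo.

    A real implementation would call a vision/OCR service here; for this
    sketch we pretend the image bytes are the text itself.
    """
    return image_bytes.decode("utf-8", errors="ignore")


def translate(text: str, target_lang: str) -> str:
    """Stand-in for a translation service, backed by a tiny demo table."""
    demo = {("bonjour", "en"): "hello", ("hello", "fr"): "bonjour"}
    return demo.get((text.lower(), target_lang), text)


def translate_photo(image_bytes: bytes, target_lang: str = "en") -> dict:
    """Full flow: detect the text in the photo, then translate it."""
    detected = detect_text(image_bytes)
    return {
        "detected": detected,
        "translated": translate(detected, target_lang),
        "target_lang": target_lang,
    }
```

For example, `translate_photo(b"bonjour", "en")` detects "bonjour" and returns "hello" as the translation; the app's bookmarking and text-to-speech features would then operate on this result.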
After users complete the translation process, they can save words to their library with a favorite button so that they can revisit what they searched for in the past. They can also add translations of a saved word in other languages, making it easy to compare translations across multiple languages without searching for the same word again.
*Planning for the future:
Language and time filters, along with a history function, could be added in the future so that users can easily find what they need for their purposes.
-Ada Developers Academy-
Key Features
Languages
Audiences
Weeks
We successfully developed the app within the given two weeks and presented it to a 52-person audience including Ada Developers Academy instructors, student developers, and mentors.
Through this project, I built skills collaborating with front- and back-end engineers, and learned about bridging designers and engineers in product management. I'm excited to have participated in this full-stack capstone project as a UX designer!
This project was scoped to a limited timeline, so we had to focus our efforts on a minimum feature set to deliver a final product. As next steps, we would like to:
01. Give users detailed favorites pages and let them jot down notes.
02. Add a copy/paste feature for translated words
03. Create a "drawing pad" version that lets users doodle objects (new feature)
04. Create additional language settings for the app