Shyamala Prayaga: Enriching the Aspects of Experience

Shyamala Prayaga | Senior Product Manager | NVIDIA

A positive product experience that fulfills real needs is one reason, among many others, why a customer chooses a specific service from the plethora of offerings in today's competitive market. Intuitive experience, coherent design, and user-focused accessibility all add to why a user chooses to interact with a business's service.

Recognizing the societal and cultural benefits of customer satisfaction delivered through accommodating user interfaces, Shyamala Prayaga, Senior Software Product Manager at NVIDIA and a self-driven evangelist for UX and voice technology, offers leading-edge services that enhance usability.

She brings insights and knowledge drawn from her experience and leadership in technical development and design roles. Her design and research work has been presented nationally and internationally to general and field-specific audiences. She has 18 years of experience designing mobile, web, desktop, and smart TV interfaces.

In addition, Shyamala has more than six years of experience designing voice interfaces for connected home experiences, automotive, and wearables.

CIOLook had the privilege of interviewing Shyamala, who discussed her vital contributions to the UX and voice-tech industry.

Below are the highlights of the interview:

Shyamala, enlighten our readers about your professional tenure in the industry so far, and shed some light on the innovations through which you are enhancing UX and voice technology.

As Senior Software Product Manager of Conversational AI: Deep Learning, I drive NVIDIA’s Speech AI GUI product suite, owning the roadmap and working with cross-functional teams to realize it. I previously worked for Ford Motor Company as the Product Owner for its Digital Assistant, leading the roadmap for voice and chatbot innovation across traditional and autonomous vehicles. My product, the SYNC 4 Digital Assistant, an intelligent, voice-activated, in-vehicle assistant, was introduced in Ford’s F-150 and Mustang Mach-E models in 2019.

Prior to Ford, I worked with Amazon and Voicebox Technologies, leading user experiences for voice applications. I was part of the 2014 launch of Alexa Gen 1 (a voice-activated smart speaker), which became an overnight sensation. My career began 22 years ago as a User Experience (UX) designer working on mobile, web, smart TV, and desktop applications. Citibank, Toyota Shopping Tool, and VidyoMobile were among the first mobile applications I designed. Over the past decade, I have designed voice interfaces for automotive, wearables, and connected homes.


Tell us more about yourself, highlighting the exceptional skillset that makes you one of the most impressive tech leaders enabling advancements in the modern industry.

I have always been fascinated by Artificial Intelligence, even during my undergraduate years. Over the course of my career, I have focused on creating user experiences for cutting-edge technologies. My journey with Conversational AI started a decade ago during an Amazon hackathon, when I prototyped an augmented reality application called the ‘Junglee Shopping Tool.’

There are still many people who don’t feel comfortable buying things online because they want to try products out first. Using emerging technologies like augmented reality and voice, I envisioned enabling e-trials to encourage online shopping. The proof of concept won the ‘People’s Choice’ award, and many leading retailers, including Amazon, later introduced the concept in their online stores.

I worked with the Department of Transportation to fund an ‘Omnichannel Digital Assistant’ research project to enable people with disabilities to easily access autonomous vehicles. My idea was to leverage conversational AI technologies such as Automated Speech Recognition, Text-to-Speech, Natural Language Processing, Automated Sign Language Detection, and Voice Biometrics, as well as adjustable touch screens with tactile surfaces, to give people with disabilities maximum utility and control in a self-driving car.

I believe that everyone should be able to access products, regardless of their age, abilities, or circumstances. In my opinion, self-driving cars will change the number one rule of driving: that the vehicle needs an able driver who is licensed and fully qualified. Vehicles that can operate themselves will not need a licensed driver; anyone who can enter an autonomous vehicle and give it a destination can use it.

I introduced the five pillars to empower inclusivity in autonomous vehicles: 1) Trust, 2) Independence, 3) Understanding, 4) Recognition, and 5) Response. I believe that ‘when a vehicle becomes the driver, the voice becomes the companion.’ Omnichannel Digital Assistants can be game changers not only in automotive but also in kiosks and retail setups. All it takes is proper orchestration.
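To make that orchestration concrete, the following is a minimal sketch (in Python) of how an assistant's turn loop might tie such components together. Every class and method name in it is a hypothetical placeholder chosen for illustration, not a specific vendor's API; a production system would plug in real speech recognition, language understanding, dialog, and synthesis services.

    # Minimal sketch of an omnichannel assistant turn loop.
    # Every class and method here is a hypothetical placeholder used to
    # illustrate orchestration; none of it maps to a specific vendor SDK.

    from dataclasses import dataclass


    @dataclass
    class Turn:
        transcript: str   # what the user said (or signed, once converted to text)
        intent: str       # e.g. "set_destination"
        reply_text: str   # what the assistant says back


    class OmnichannelAssistant:
        def __init__(self, asr, nlu, dialog, tts, display):
            # Pluggable components: speech recognition, language understanding,
            # dialog policy, speech synthesis, and a visual/touch surface for
            # users who prefer or need a screen.
            self.asr, self.nlu, self.dialog = asr, nlu, dialog
            self.tts, self.display = tts, display

        def handle_turn(self, audio_chunk: bytes) -> Turn:
            transcript = self.asr.transcribe(audio_chunk)          # speech -> text
            intent = self.nlu.classify(transcript)                 # text -> intent
            reply_text = self.dialog.respond(intent, transcript)   # intent -> reply
            self.display.show(reply_text)                          # visual channel
            self.tts.speak(reply_text)                             # audio channel
            return Turn(transcript, intent, reply_text)

Because each component is pluggable, the same loop could sit behind a vehicle cabin, a kiosk, or a retail display, which is the sense in which the assistant is omnichannel.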

Voice interfaces have made many advances and are the most natural form of interaction, but there are still challenges and gaps in language, understanding, and recognition. As a result, the products have limited utility and trustworthiness. I recently published a book called ‘Emotionally Engaged Digital Assistant: Humanizing Design and Technology’.

The book lays out several key frameworks for implementing voice interfaces that build trust. Additionally, I introduced six principles of emotional engagement: empathy, ease, transparency, relationship, confidence, and delight. In the book, I discuss how technology and design can be humanized through the use of the six principles and the framework.

Looking at the recent improvements in conversational AI and Deep Learning, share your valuable opinion on how these emerging technologies ensure reliability in modern-day operations.

We have always imagined a voice assistant we can talk to, and the rise of voice-enabled assistants is proof of that dream. With advancements in conversational AI and deep learning, it has become possible to talk to these voice-activated assistants and automate repetitive tasks. A survey of the more than one hundred people I interviewed for my book ‘Emotionally Engaged Digital Assistant: Humanizing Design and Technology’ found that almost everyone owns at least four voice-enabled devices and uses them to set alarms, schedules, and reminders.

Speech is the most natural form of communication for humans; unlike most technology, it requires no learning curve. With advancements in AI and deep learning, these voice assistants are becoming easier to communicate with, and we can now converse with them in a natural, conversational way instead of relying on rule-based interfaces.

From automotive applications to customer service use cases, conversational AI technology is becoming increasingly useful. In automotive, it allows customers to get directions, play music, and get information about nearby points of interest, reducing the distractions that come with using smartphones behind the wheel. To optimize their ordering and fulfillment pipeline, retailers are experimenting with voice-activated drive-throughs.

With advances in text-to-speech, we can now generate high-quality, emotive synthetic voices from very small datasets rather than the days’ worth of recordings once required. In addition to helping content designers who create audio content, TTS also provides voices for people with disabilities. Advances in natural language processing, supported by large language models, enable high-quality conversation between bots and humans, and this content generation can benefit virtual assistants, IVR systems, and other applications.
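As a rough illustration of how an application might consume such a synthesized voice, the sketch below shows the typical request/response shape of a neural TTS call and writes the returned audio to a WAV file. The client object, its synthesize() method, and its parameters are assumed placeholders rather than any real SDK's API.

    # Sketch of the request/response pattern for a neural TTS service.
    # The "client" object, its synthesize() method, and its parameters are
    # hypothetical placeholders showing the shape of the call, not a real SDK.

    import wave


    def synthesize_to_wav(client, text: str, out_path: str) -> None:
        """Ask a TTS service for audio and write it to a mono 16-bit WAV file."""
        result = client.synthesize(
            text=text,
            voice="custom-brand-voice",   # e.g. a voice cloned from minutes of data
            emotion="cheerful",           # emotive TTS lets callers pick a style
            sample_rate_hz=22050,
        )
        with wave.open(out_path, "wb") as wav:
            wav.setnchannels(1)            # mono
            wav.setsampwidth(2)            # 16-bit PCM
            wav.setframerate(22050)
            wav.writeframes(result.audio)  # raw PCM bytes returned by the service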

As a professional technology evangelist, tell us about the social and cultural benefits of voice-tech evolution.

During an episode of my podcast, ‘The Future is Spoken,’ I interviewed a person with cerebral palsy who relies on a wheelchair to get around. His limited mobility prevents him from doing many everyday tasks efficiently. As a first-time dad expecting a baby, he voice-enabled his entire home so that he could enjoy fatherhood as any other person would. To assist him in holding and putting back his baby, he built a voice-activated cradle that can adjust its height. It is an example of the power of voice-enabled technology to assist everyone and to enable access for everyone.

From urban innovation to women’s safety, voice technology is being explored in many different contexts. Seniors in senior living communities are using voice assistants for companionship and emergency help. A study is underway that aims to use voice to detect Parkinson’s disease and COVID-19 based on how certain phonemes are pronounced. Rural areas of India are exploring voice technology to help digitally illiterate people find information related to agriculture.

Voice technology offers numerous benefits, and these explorations are evidence that many possibilities remain unexplored.

What is your primary role at NVIDIA, and what inspires you to galvanize enrichments in the voice and UX niche?

In my role as Senior Software Product Manager of Conversational AI: Deep Learning, I own the roadmap and vision for NVIDIA’s Speech AI GUI product suite, which allows customers to customize Speech AI with self-service offerings requiring little or no code. In the conversational AI industry, customers are increasingly demanding the ability to create their own synthetic voices.

There are many possible applications, such as preserving the voice of a cancer patient or powering a metahuman’s voice. Traditionally, creating a custom voice has required hours of data and the technical knowledge to train a model and create a production-grade clone. No-code and low-code platforms are lowering that barrier, enabling more people to customize voices with minimal data and coding.

Originally, I studied architecture but switched to UX because I love simplifying complex user interactions. As a child, I watched my parents struggle with technology, which inspired me to design usable products. No matter how many great features a product has, it isn’t worth the cost if it isn’t usable. It is my firm belief that UX is the heart and soul of every product. I base my designs on the premise that a fifth grader should be able to understand them; if a design isn’t easy for a fifth grader, the average person won’t understand it either.

So far, NVIDIA has been a significant game-changer in the world of technology. How is your expertise helping the company scale its progress to greater heights?

I assert that any product’s heart and soul is its user experience. If a product is easy to use, there will likely be ongoing utility and more feature requests; if it is not usable, customers will abandon it.

I design every product and feature from the user’s perspective. Every step of my design process involves extensive user research and validation with users. When a product is designed with the user in mind, it will always be usable. This facilitates not only utility but also scalability.

What would be your advice to the budding aspirants who are willing to venture into the UX niche in the near future and develop voice-assistive software?

Take part in industry events and meetups, build connections, and learn about conversational AI and UX through books, podcasts, and events. Become familiar with key concepts, tools, and processes. Use the skills you learn to create capstone projects and sample applications, and make sure you master practical skills; no matter how much you read and study, you cannot gain confidence until you put what you learn into practice. Get to know industry experts and what they do on a daily basis. Establish meaningful connections.

Where do you see yourself in the long run, and how are you working towards achieving your future goals in this niche?

These are still early days for the conversational AI industry. Technology enablement and enhancement have a long way to go, as does defining unified standards and adopting them across sectors. My goal is to bring all of these together to create an omnichannel and inclusive experience.