

leva

4 weeks 
A mobile-first AI virtual assistant built to intimately understand their user, and then use that understanding to intelligently play music. leva is not a smart DJ, but is instead designed as a personal confidant the user can turn to as a resource for listening to music.

leva explores how music affects the dimensions of identity, mood management, arousal, and social interaction on an individual level.




Tools:
Javascript (p5.js)
Figma 
Illustrator
After Effects


Skills:
Interaction design
Conversational design
Prototyping for AI
Team motion/js file management 


Team:
Jaclyn Saik
Elena Deng
Eunice Choe
Julie Choi






Project Space –
Human mood, preferences, and music are inherently interlinked. Most current music streaming platforms already use intelligent algorithms that predict your music preferences based on your previous listening habits.

But rather than approaching music selection from a genre, artist, or labeled playlist perspective, what if AI and virtual assistant technology were used to select music based on conversation with the user? 

And specifically, how do the appearance and aesthetics of this interface contribute to the effectiveness of the product? 




How leva collects data –
Final UI Element Kit –

UI Elements



Scenario Breakdown –
Here, our user Michelle, who only recently started using leva, wants to access music based solely on her mood: 



In this scenario, Michelle is instead interested in her connected friends’ moods, and uses leva to better understand them through their listening data. Julian is also interested in what she’s listening to, and wants to combine tastes to make a collaborative playlist based on their shared emotions: 






Michelle now has a specific listening goal, and leva contextualizes their recommendations based on time and user history:









01– Understanding the user

We kept our research methods straightforward: we talked to users about music and mental health.


Here are some key insights and pain points from a series of 10 interviews conducted with local students and young professionals:




Personas–
By consolidating our preliminary user research and affinity mapping our observations, we created two personas modeled after our target users. Although the focus of this project was more on the visuals and direct voice interaction with leva, we wanted a more robust description of who leva might be speaking with.




Angie

20-year-old undergraduate student, studying business in Pennsylvania



Angie lives about a mile away from campus and usually walks to and from class every day, listening to music the entire time.

Angie likes getting song recommendations from her friends, but struggles to organize all of the content she gathers into the right playlist. She’s not sure what the “right” playlist means, since her mood changes a lot. Also, she likes to have as little touch-interaction with her phone as possible while listening, since the winters are cold and she prefers a hands-free experience while walking.

Kyle

18-year-old high school senior, living in California and preparing for college. 



Kyle listens to music in his room constantly, and cares a lot about his reputation as a curator of new artists. He organizes a large amount of music into playlists for himself and his friends.


Kyle is currently seeing a therapist for depression and anxiety, and he knows that music often has a positive effect on his mood. He is looking for more support in how to use music to manage his symptoms.




02– Research: how music affects mood

Vocal assistants are nearly ubiquitous in our daily lives, but most of the interactions we have with them are limited to task management and data recall. leva was designed to be more than that: a tool that better understands users on an emotional level, and reflects that understanding back to the user (personally, I was imagining an expansion on ELIZA).

Leva’s ultimate purpose:
  • Encourage self-awareness through mood-awareness
  • Use informed understanding of user emotion, habits and history to create an environment dictated by mood
  • Focus on how you feel, not just what you want to hear

But why music? The psychological effects of music have been studied for decades, notably in a large-scale survey by Lonsdale and North (2011). Through a uses-and-gratifications approach, they identified 30 musical uses that could be reduced to eight distinct dimensions: identity, positive and negative mood management, reminiscing, diversion, arousal, surveillance, and social interaction.

With leva, we aimed to explore how music affects these dimensions on an individual level.



A resting leva, demonstrating color change



03– Motion states
After mocking up our interaction prototypes in After Effects, we moved to the JavaScript library p5.js to create prototypes that were responsive to sound. The final states were designed not only for clarity of interaction with the user, but also to give leva character.

leva’s “personality” is defined by a fluid, oscillating response to sound input, which in turn informs their functional states.
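Our final p5 sketches are more involved, but the core of the motion states can be sketched roughly like this: read the live input level and let it push an oscillating, noise-softened outline outward. This is only a minimal illustration; the shape, palette, and constants are placeholders rather than leva’s actual form.

let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn(); // microphone input from the p5.sound add-on
  mic.start();
  noStroke();
}

function draw() {
  background(20);
  const level = mic.getLevel(); // roughly 0–1 amplitude of the incoming sound
  const swell = map(level, 0, 0.3, 0, 60, true); // louder input pushes the form outward

  fill(120, 180, 255, 180);
  beginShape();
  for (let a = 0; a < TWO_PI; a += 0.1) {
    // Perlin noise keeps the edge fluid; the sound level adds the oscillation
    const wobble = noise(cos(a) + 1, sin(a) + 1, frameCount * 0.02) * 20;
    const r = 80 + swell + wobble;
    vertex(width / 2 + r * cos(a), height / 2 + r * sin(a));
  }
  endShape(CLOSE);
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio input starts
}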







04– Empathy via color
One prominent feature we prioritized was leva’s ability to give feedback directly related to the mood of the conversation they have with the user. We finalized leva’s color to adjust in temperature and saturation to show empathy towards the user’s emotions.


Serious



Positive

We prototyped this interaction using wizard-of-oz techniques to mimic the tone of a mock conversation with a participant. It was through this testing that we understood leva’s own color should adjust, rather than the environment they’re inside, as it shows a more direct connection to tone.





Because we weren’t using an actual voicebot, we prototyped the sentiment-analysis feedback simply by mapping it to the mouse’s x-position.
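A minimal version of that stand-in looks roughly like this in p5.js, with mouseX acting as the faked sentiment score; the two colors below are placeholders rather than our final palette.

let serious, positive;

function setup() {
  createCanvas(400, 400);
  colorMode(HSB, 360, 100, 100);
  serious = color(230, 30, 70);  // cooler, desaturated "serious" tone
  positive = color(35, 80, 100); // warmer, saturated "positive" tone
  noStroke();
}

function draw() {
  background(0, 0, 10);
  // mouseX stands in for the sentiment of the conversation: left = serious, right = positive
  const sentiment = constrain(mouseX / width, 0, 1);
  fill(lerpColor(serious, positive, sentiment));
  circle(width / 2, height / 2, 200);
}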







05– Form

leva’s form is inspired by the rounded shape of a whole note and the organic shapes of a blooming flower.

The overlapping organic shapes layered on top of each other represent the complexity of human emotion, individual preference, and the process of creating music itself.






Final thoughts –

Music and emotion are inherently tied. So are music and conversation. With AI, there’s a possibility to combine these, and in turn promote self-awareness and provide empathy.


With this project, I enjoyed synthesizing skills I was already comfortable with, such as motion graphics prototyping and typographic design systems, with skills I had never explored before. Learning to prototype a conversation, rather than simply a digital interaction or short wireframe sequence, was remarkably challenging. 

We were restricted by the timeline and our limited initial understanding of p5.js, but personally I’m proud of the skills we gained in such a short sprint. I can now comfortably mock up live prototypes in p5 that actively respond to the external environment. If we could expand on this project, I would want to fully flesh out a user flow for the entire app leva lives inside (especially the UI elements).











Jaclyn Saik 2022