Type

Consumer Application

Format

Desktop

Area

UX Research; UX Design

Employer

Lexmark

Tools

Figma, Microsoft Copilot

Team

Product Owner, Business Technology, Data Scientists

Assigning Personality to Generative AI


Background

Microsoft recently began integrating artificial intelligence (AI) tools into its Microsoft 365 products. As an early adopter, Lexmark was quick to begin experimenting with Copilot, a generative AI chatbot that can be used securely with a company’s own data.

We were fortunate to try several beta features of Copilot, including its ability to go beyond answering questions about the data to actually completing tasks for users. This was a great opportunity to work with some of the most promising AI assistant tools on the market.

Role

I was the sole designer on the project. My stakeholder group consisted of a product manager, the business technology team that would implement the feature with Microsoft Dynamics, a pilot group of users from the data science team, and a consultant from Microsoft who advised us on the capabilities of Copilot.


Approach

As part of the Connected Technology team, I was asked to explore what these features could look like in our internal chatbot. I was immediately inspired by articles I’ve read over the years. In one study, researchers found that users completing complicated tasks, such as searching for a flight to purchase, felt they received more value when they could see the work the algorithm was doing.

Day 1: Problem definition and scoping; choose what to focus on

Day 2: Competitive review; brainstorm solutions; sketch ideas

Day 3: Refine idea; negotiate trade-offs; resolve conflicts

Day 4: Design and build; create artifacts

Day 5: Review and gather feedback; revise design; present resulting solution

Design Sprint

Because this was a small project, I chose to run a design sprint: a short, focused, defined period of one week. This allowed me to follow a modified user-centered design process. As the only full-time resource on the project, I connected with the other stakeholder groups on an ad hoc basis to present my progress and gather feedback.

Without time or budget to conduct user research, I began with a competitive review to see what features users would expect when interacting with generative AI. It became clear that current models use a familiar chat-based interface, letting users interact with them as they would in a typical conversation. This insight allowed me to focus on designs that framed AI as a conversation partner rather than a new technology.

Making It Human

Humans tend to anthropomorphize everything. We give our cars names, make movies about emotional robots, and we see faces everywhere when walking down the street. This need to apply human characteristics to inanimate objects could play an interesting role in how we design generative AI in the near future.

So how will we interact with this new technology? Will we be partners achieving goals hand-in-hand? Or will we relegate AI to being subservient helper bots? Would we even want AI to play a leading role in our lives? Although these questions were beyond the scope of my assignment, they helped me create a framework for how the Copilot features could be presented to users.

Exploration

I began exploring concepts by asking: how would it affect me if AI were an all-knowing, all-powerful helper? What if it were just a group of capable yet dim-witted assistants? Should it present itself as separate entities with varying skills? Ultimately I settled on a familiar metaphor: the odd couple, two very different personalities that work together to play off each other’s strengths, like C-3PO and R2-D2.


This was a way to represent the different problems AI would be trying to solve. One can answer any question or calculate the probabilities of multiple scenarios, while the other doesn’t speak, yet you trust it to fly your starfighter or deliver your most important messages.


Result

The final concept divided the Copilot feature into two separate agents a user would interact with. The main agent would provide answers and guidance for all your questions, while the support agent followed up, asking if it could complete associated tasks for you. With extended use, the AI would learn and anticipate common questions from you and your colleagues, taking the initiative to handle more tasks once you’ve become comfortable with its work.


My next step would be to design a set of prototype tests to get quick feedback on the concepts. It would be fairly easy and economical to set up an unmoderated study on a remote testing platform such as UserZoom or Userlytics to gauge people’s attitudes about trust, their perception of competence, and ultimately their preferred experience. Over time the study could be updated to test new Copilot skills and users’ attitudes toward them.
