In human-centric AI, UX and software roles are evolving

Software development has long required two kinds of experts: people who care about how a user interacts with an application, and people who write the code that makes it work. The boundary between the user experience (UX) designer and the software engineer has traditionally been clear, but the advent of “human-centered artificial intelligence” is challenging that division.

“UX designers use their understanding of human behavior and usability principles to design graphical user interfaces. But AI is changing how interfaces look and work,” said Hariharan “Hari” Subramonyam, a research professor at the Stanford Graduate School of Education and a fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

In a new preprint, Subramonyam and three colleagues from the University of Michigan show how this frontier is shifting and offer recommendations for how the two disciplines can communicate in the age of AI. They call their recommendations “desirable leaky abstractions”: practical steps and documentation that each discipline can use to convey the low-level details of its work in language the other can understand.

Read the study: Human-AI Guidelines in Practice: The Power of Leaky Abstractions in Cross-Disciplinary Teams

“With the help of these tools, the disciplines leak important information back and forth about what was once an impenetrable border,” explains Subramonyam, himself a former software engineer.


Less is not always more

As an example of the challenges AI poses, Subramonyam points to facial recognition used to unlock phones. Once upon a time, the unlock interface was easy to describe. User swipes. Keyboard appears. User enters the access code. Application authenticates. User gains access to the phone.
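That pre-AI flow can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the study: the function and state names are invented, but it shows why a deterministic interface was easy to specify and hand off between designer and engineer.

```python
# Hypothetical sketch of the pre-AI unlock flow described above.
# Every step is deterministic, so the UX spec maps directly to code.

def unlock_flow(entered_code: str, stored_code: str) -> list[str]:
    """Return the sequence of UI states for a passcode unlock."""
    states = ["swipe", "keyboard_shown", "code_entered"]
    if entered_code == stored_code:
        states.append("authenticated")  # user gains access to the phone
    else:
        states.append("rejected")
    return states

print(unlock_flow("1234", "1234"))
# -> ['swipe', 'keyboard_shown', 'code_entered', 'authenticated']
```

With facial recognition, by contrast, the outcome depends on a trained model rather than a fixed rule, which is exactly why the specification can no longer stop at the interface.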

With AI-powered facial recognition, however, UX design has to go deeper than the interface, into the AI itself. Designers need to think about things they have never had to consider before, such as the training data or the way the algorithm is trained. They find it difficult to understand AI capabilities, describe how things should work in an ideal world, and build prototype interfaces. Engineers, in turn, find that they can no longer build software to exacting specifications. Engineers also often view training data as a non-technical specification, that is, as someone else’s responsibility.

“Engineers and designers have different priorities and incentives, which creates a lot of friction between the two fields,” Subramonyam says. “Leaky abstractions help reduce that friction.”

Radical Reinvention

In their study, Subramonyam and colleagues interviewed 21 application design professionals — UX researchers, AI engineers, data scientists and product managers — across 14 organizations to conceptualize how professional collaborations are evolving to meet the challenges of the artificial intelligence era.

The researchers catalog a number of leaky abstractions that UX professionals and software engineers can use to share information. For UX designers, suggestions include sharing annotation codebooks that communicate user needs for training data, storyboarding ideal user interactions and desired AI model behavior, and drawing on user testing for examples of faulty AI behavior to drive iterative interface design. They also suggest inviting engineers to participate in user testing, a practice uncommon in traditional software development.

For engineers, the co-authors recommend leaky abstractions such as assembling computational notebooks that document data attributes, providing visual dashboards that set expectations for AI and end-user performance, creating spreadsheets of AI outputs to facilitate prototyping, and exposing the algorithm’s various “knobs” so that designers can use them, among other things, to refine its parameters.
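One of those engineer-side abstractions, a spreadsheet of AI outputs for designers to prototype against, can be sketched minimally. This is a hypothetical example: the file names, labels, and confidence values are invented, and the study does not prescribe a particular format.

```python
# Hypothetical sketch: an engineer "leaks" raw model outputs into a
# CSV so designers can prototype interfaces against real predictions.
# The predictions below are invented for illustration.
import csv
import io

predictions = [
    {"input": "photo_001.jpg", "label": "face_match", "confidence": 0.97},
    {"input": "photo_002.jpg", "label": "no_match", "confidence": 0.41},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["input", "label", "confidence"])
writer.writeheader()
writer.writerows(predictions)

# In practice this would be written to a shared file the design team opens
# in a spreadsheet tool; here we just print the CSV text.
print(buf.getvalue())
```

The point of the artifact is not the format but the leak: designers see concrete model behavior, including low-confidence cases, before any interface is finalized.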

The authors’ main recommendation, however, is that collaborating parties delay fixing design specifications for as long as possible. The two disciplines have to fit together like pieces of a jigsaw puzzle, and it takes time to polish the rough edges; locking in specifications too early leaves no room for that.

“In software development, there is sometimes a misalignment of needs,” Subramonyam says. “Instead, if I, the engineer, create a first draft of my puzzle piece and you, the UX designer, make yours, we can work together to resolve the misalignment across multiple iterations. Only when the pieces finally fit do we tighten up the application specifications.”

In all cases, the historical boundary between engineer and designer is the enemy of good human-centered design, Subramonyam says, and leaky abstractions can penetrate that boundary without completely rewriting the rules.

Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on hai.stanford.edu. Copyright 2022.
