
What if GUI elements were not limited to boxes?

Thinking outside the box model in GUI design.

Illustration of a woman pointing at a GUI slider knob that crashes through the boundaries of a box.

Buttons, panels, menus, lists — it is tempting to think that GUI elements can occupy only rectangular screen regions. Here we investigate why and how this limits users, UI designers, and developers — and how we can do better. To whet your appetite: the discussed concepts enable widgets that adapt to users in small but helpful ways, such as this slider. It clearly doesn’t believe in staying confined to a fixed box and bends to the thumb’s movement to increase comfort and reach.

Thumb moving in an arc-like shape on a smartphone. Screen shows a slider UI element that bends itself to match the thumb arc.
A slider that bends to the user’s current thumb movement to increase comfort and reach.

To get to this slider and its dynamic kin, we first take a closer look at the box model, then examine its limitations, and finally explore a more powerful alternative model.

Breaking the box into two

First, we observe that GUI elements today have two aligned representations:

  1. Visual: The “look”, e.g. a green button of 120 by 35 pixels.
  2. Functional: A box that defines the button’s active screen area. Clicks/touches within this box trigger the button.

In this “box model”, visual and functional representations are aligned: You trigger a button by hitting its visuals. The figure below visualises this alignment.

Side-by-side comparison of a smartphone UI. Left: UI with text fields and button. Right: Corresponding bounding boxes.
Visual and functional representation of a graphical user interface, using the box model. Original photo by Marianne Krohn.

This aligned “box model” is so fundamental that it is used almost everywhere today, for example on websites and in GUI frameworks for iOS and Android apps. It is simple and useful but it is also limiting, as examined next.
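To make the box model concrete, here is a minimal sketch of the point-in-box hit test it implies. All names are hypothetical and framework-independent; real toolkits implement essentially this logic for taps and clicks.

```kotlin
// Minimal sketch of box-model hit testing (hypothetical names, no framework).
data class Box(val x: Float, val y: Float, val width: Float, val height: Float) {
    // Binary test: a point is either inside the box or it is not.
    fun contains(px: Float, py: Float): Boolean =
        px >= x && px <= x + width && py >= y && py <= y + height
}

data class Widget(val id: String, val bounds: Box)

// Dispatch a tap to the first widget whose box contains the point;
// a near-miss simply returns null and nothing happens.
fun dispatchTap(widgets: List<Widget>, px: Float, py: Float): Widget? =
    widgets.firstOrNull { it.bounds.contains(px, py) }
```

Two properties are baked into this sketch: the answer is binary (hit or nothing), and only a single point is tested, so input that unfolds over time is reduced to isolated points. Both come up again below.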

How the box model limits UI designers and users

During my PhD research in Human-Computer Interaction and Machine Learning I found three key limitations of the box model for UI designers and users today:

  • A box is an inadequate model for anything other than a click/tap: A point-in-box test can detect that a user tapped on a button. But what about a slider? What if a phone user clearly slides their finger to the right, yet just a bit below the slider’s visuals? Nothing happens. The box model is static and cannot adequately handle variability, gestures, or, more generally, input that unfolds over time.
  • The box model imposes binary event handling: A click/touch is either in or out of a box. What if the user just barely misses a button? Nothing happens. There is no notion of uncertainty, which could be used, for example, to trigger the most likely action or ask the user for clarification.
  • The box model imposes a 1:1 mapping between GUI elements and users’ input behaviour: Each GUI element can only have one box. Thus, users can trigger each element in only one specific way. For instance, while most users might tap on a button in an app, people with hand tremor might prefer a gesture (e.g. crossing or encircling it). The box model does not account for this. It thus hinders designing for accessibility and, more generally, for individual input behaviour.

My conclusion from this research was this:

With the box model, the only user input behaviour that UI designers can expect to suitably account for is a precise tap/click.

Exploring alternatives to the box model

How might we do better than the box model then? Based on the presented limitations, a promising approach is to investigate models that are more expressive than boxes.

Concretely, here we explore replacing boxes with probabilistic sequence models. Without technical details, we can think of this as assigning a set of gesture recognisers to each GUI element. I implemented this idea in an experimental framework for Android, called ProbUI, as part of my research.

In the experimental “ProbUI” GUI framework, we replaced boxes with sets of probabilistic gesture recognisers. These models of user input behaviour are decoupled from the GUI elements’ visuals.
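ProbUI’s actual API is described in the paper linked at the end. Purely as an illustration of this decoupling, a GUI element might be modelled along the following lines; this is a hypothetical interface of my own, not ProbUI’s real code.

```kotlin
// A touch sample in screen coordinates at a given time (hypothetical type).
data class TouchEvent(val x: Float, val y: Float, val timeMs: Long)

// One probabilistic model of an input behaviour, decoupled from any visuals.
interface BehaviourModel {
    fun onEvent(e: TouchEvent)   // updated with every incoming touch event
    fun probability(): Double    // current belief that this behaviour is occurring
    fun reset()
}

// A GUI element now owns a *set* of behaviour models instead of one box.
class ProbabilisticElement(val id: String, val behaviours: List<BehaviourModel>)
```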

This approach addresses the three limitations of the box model and leads to the following improvements:

  • Probabilistic event handling: UI designers and developers get a probability that the user intended to hit a target, for each target, updated with each incoming touch event. This unlocks new design opportunities: For instance, an app may ask the user for clarification if they pressed between two buttons. Or the app may still trigger a button the user just barely missed (see the first sketch after this list).
  • Each GUI element can react to multiple input behaviours: Instead of a single box, UI designers and developers may specify any number of input behaviours for the same GUI element. For instance, we might design a button that reacts to both the usual tap as well as to encircling it with the finger. This supports accessibility and individual preferences and input styles.
  • A better model for gestures: Sequence models handle gestures better since they model behaviour over time. We can roughly think of such models as chains of screen areas, instead of a single box (a toy version is sketched after the summary below). For instance, “slide-to-unlock” is a gesture starting at the left end of the slider, then moving to the right. With such a model, the slider stays responsive even if the user’s finger happens to wiggle a bit, does not start on the slider, or leaves the slider’s visuals on the way.
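As a concrete illustration of the first point, the sketch below scores each target with a Gaussian fall-off around its centre and normalises across all targets to obtain intention probabilities. This is a deliberate simplification I am assuming for illustration; ProbUI’s actual sequence models are richer.

```kotlin
import kotlin.math.exp

data class Target(val id: String, val cx: Float, val cy: Float)

// Unnormalised score: Gaussian fall-off with distance from the target centre.
// sigma controls how forgiving we are about near-misses (assumed value).
fun score(t: Target, px: Float, py: Float, sigma: Float = 40f): Double {
    val dx = px - t.cx
    val dy = py - t.cy
    return exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
}

// Probability that the user intended each target, given one touch point.
fun intentProbabilities(targets: List<Target>, px: Float, py: Float): Map<String, Double> {
    val scores = targets.associate { it.id to score(it, px, py) }
    val total = scores.values.sum()
    return scores.mapValues { it.value / total }
}

fun main() {
    val targets = listOf(Target("ok", 100f, 500f), Target("cancel", 260f, 500f))
    val probs = intentProbabilities(targets, 180f, 505f) // press between the buttons
    println(probs) // roughly 50/50 -> the app could ask for clarification
}
```

An app could trigger the most likely target only when its probability clears a threshold, and ask the user for clarification otherwise.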

In summary, the key benefit of this idea — of replacing boxes with probabilistic sequence models — is the expressive power gained by decoupling visual and functional representations of GUI elements. This empowers UI designers and developers to account for uncertainty and for more than one way of using a GUI element.
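The “chain of screen areas” intuition from the gestures bullet above can also be sketched directly, as promised. The toy recogniser below models slide-to-unlock as two generous areas that must be visited in order, left then right; it tolerates wiggle because each area is larger than the slider’s visuals. This is my simplified, hard-boundary stand-in for a probabilistic sequence model, which would instead score partial evidence continuously.

```kotlin
// Toy sequence recogniser: slide-to-unlock as a chain of two loose areas,
// visited in order. A real probabilistic model would score partial evidence
// instead of using hard area boundaries.
data class Area(val x: Float, val y: Float, val w: Float, val h: Float) {
    fun contains(px: Float, py: Float) =
        px in x..(x + w) && py in y..(y + h)
}

class SlideRecogniser(private val chain: List<Area>) {
    private var stage = 0
    fun onEvent(px: Float, py: Float) {
        if (stage < chain.size && chain[stage].contains(px, py)) stage++
    }
    val completed get() = stage == chain.size
    fun reset() { stage = 0 }
}

fun main() {
    // Two areas taller than the slider itself, so a wiggling finger still counts.
    val recogniser = SlideRecogniser(listOf(
        Area(0f, 400f, 80f, 200f),    // start: left end, generous bounds
        Area(240f, 400f, 80f, 200f),  // end: right end, generous bounds
    ))
    listOf(20f to 560f, 150f to 430f, 300f to 470f) // finger drifts above/below the slider
        .forEach { (x, y) -> recogniser.onEvent(x, y) }
    println(recogniser.completed) // true: the slide is recognised despite the wiggle
}
```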

We next look at three examples of such new GUI elements in action.

Examples: UI widgets that adapt to hand posture

We now return to the bending slider from the beginning: The short clip below showcases the slider and two more examples, implemented with ProbUI.

Video: Example GUI widgets for one-handed use on a smartphone, created with ProbUI.

These widgets demonstrate how thinking outside the box model enables new adaptive UI designs and interactions. In this case, the widgets address the challenge of designing for different hand postures in mobile touch interfaces.

Takeaways

ProbUI’s concepts are not part of current UI frameworks. Luckily, you do not need to wait for that to change: you can apply these concepts in your UI work today and benefit from the lessons learned here.

If you are a UI/interaction designer, you can think outside the box model by considering these ideas:

  • Account for input behaviour diversity: Design UI elements such that they can be activated in various ways. For example, a button could react to a tap but also to encircling it with the finger (see the sketch after this list). This helps to accommodate varying preferences and needs, including accessibility considerations.
  • Design micro adaptations: Let UI elements visually adapt to variations in user behaviour. This helps to accommodate varying usage contexts, such as adapting to hand postures, as shown in the examples above.
  • Provide feedforward: We all know about giving feedback after an action. Thinking outside the static box model emphasises designing for dynamic guidance already during interaction. This ties in well with the previous points: Think about visual indicators to communicate multi-use options and micro adaptations to your users.
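To make the first point concrete, here is a hedged sketch that attaches two recognisers, a tap and a rough encircling check, to the same button, so that both input behaviours trigger the same action. The encircling test (total turning angle around the button centre close to 360°) is my own simplistic heuristic, not a method from the article.

```kotlin
import kotlin.math.PI
import kotlin.math.abs
import kotlin.math.atan2
import kotlin.math.hypot

data class Pt(val x: Float, val y: Float)

// Behaviour 1: a short, nearly stationary stroke counts as a tap.
fun isTap(stroke: List<Pt>): Boolean {
    if (stroke.isEmpty()) return false
    val d = hypot(stroke.last().x - stroke.first().x, stroke.last().y - stroke.first().y)
    return stroke.size < 10 && d < 15f
}

// Behaviour 2: a stroke whose total turning around the button centre is
// roughly a full revolution counts as encircling (simplistic heuristic).
fun encircles(stroke: List<Pt>, cx: Float, cy: Float): Boolean {
    var turn = 0.0
    for (i in 1 until stroke.size) {
        val a1 = atan2((stroke[i - 1].y - cy).toDouble(), (stroke[i - 1].x - cx).toDouble())
        val a2 = atan2((stroke[i].y - cy).toDouble(), (stroke[i].x - cx).toDouble())
        var d = a2 - a1
        if (d > PI) d -= 2 * PI   // unwrap jumps across the +/- pi boundary
        if (d < -PI) d += 2 * PI
        turn += d
    }
    return abs(turn) > 5.5 // close to a full revolution (2*pi is about 6.28)
}

// Same action, two input behaviours: a tap OR an encircling stroke triggers it.
fun handleStroke(stroke: List<Pt>, cx: Float, cy: Float, action: () -> Unit) {
    if (isTap(stroke) || encircles(stroke, cx, cy)) action()
}
```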

Conclusion

We have observed that each GUI element has two representations: a visual and a functional one. The key limitation of the popular box model is to assume that these two need to be aligned.

This limits today’s GUIs to handling user input via binary point-in-box tests. Unfortunately, that is adequate for only one kind of input: precise, single taps/clicks.

We presented an approach to account for a wider variety of user behaviour by replacing boxes with sets of probabilistic sequence models. Essentially, each GUI element then gets its own classifier to distinguish and react to multiple different input behaviours.

This unlocks several benefits for creating more responsive, accessible, and “intelligent” GUIs: handling uncertain user intention, adequately modelling user behaviour over time (e.g. gestures), and allowing for more than one way of using each GUI element.

Thus, thinking outside the box model empowers us to create UIs that better accommodate individual user preferences, contexts, and needs.

More technical details are available in our paper on the ProbUI project.

