STORYTELLING


Dré Nitze Nelson

Creative Digital Product & Hands-on UX/UI Designer

01/2020 San Francisco

Big Screen

How to design a 48-inch in-vehicle display experience.

PART ONE: Interaction design

After one look at Byton’s sprawling 48” display, people’s next question is always “how do you design for that?” Their concern is mainly around driver distraction. I’d like to refer to David’s post, which covers a lot about the position of the screen and other safety-related aspects. For now, let’s focus on the digital user experience and interaction models.

Our interaction models take advantage of muscle memory, making the system easy to use. Going further, holistic behavior patterns help make the system predictable. We simplify complexity by taking advantage of established mental models and common sense.

Our 48-inch Byton Stage display is not a touchscreen. Instead, a driver display in the center of the steering wheel allows the driver to interact with the content on the Byton Stage. This driver display has two modes:

  1. Trackpad Mode to conveniently interact with content on our Byton Stage. This mental model is based on remote-control or PC-trackpad interactions, where the user doesn’t have to look at the input surface. With no content displayed on the Driver Display, we can keep the driver’s eyes on the road.
  2. Settings Mode to change seat position and driver settings. This mental model is based on simple lists and toggle buttons, which we designed to be used safely while manually driving. In contrast to many other cars with a head unit, the hand movement in an M-Byte is much shorter, allowing the user to complete a task more quickly and therefore more safely. The same applies to the co-driver display and the RSE.
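To make the two modes concrete, here is a minimal sketch, in Python, of how the mode switch and the swipe routing could work. The class and method names are hypothetical illustrations, not Byton’s actual software.

```python
from enum import Enum, auto


class DriverDisplayMode(Enum):
    """Hypothetical modes of the driver display in the steering wheel."""
    TRACKPAD = auto()   # blank surface, relays pointer input to the Byton Stage
    SETTINGS = auto()   # simple lists and toggles for seat and driver settings


class DriverDisplay:
    """Minimal sketch of the mode switch; not production code."""

    def __init__(self) -> None:
        self.mode = DriverDisplayMode.TRACKPAD

    def toggle_mode(self) -> DriverDisplayMode:
        """Switch between the two modes, e.g. via a hardware button."""
        self.mode = (DriverDisplayMode.SETTINGS
                     if self.mode is DriverDisplayMode.TRACKPAD
                     else DriverDisplayMode.TRACKPAD)
        return self.mode

    def handle_swipe(self, dx: float, dy: float) -> str:
        """Route a swipe either to the big screen or to the settings list."""
        if self.mode is DriverDisplayMode.TRACKPAD:
            # Forward relative motion to the cursor on the Byton Stage.
            return f"move stage cursor by ({dx:+.1f}, {dy:+.1f})"
        # In Settings Mode the same gesture scrolls the local list instead.
        return f"scroll settings list by {dy:+.1f}"


if __name__ == "__main__":
    display = DriverDisplay()
    print(display.handle_swipe(12.0, -3.0))   # trackpad: moves the stage cursor
    display.toggle_mode()
    print(display.handle_swipe(0.0, 8.0))     # settings: scrolls the local list
```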

In our testing and studies, users appreciated that we design based on common sense. The intuitive system is easy and fun to use since it resembles the interaction models they have learned from using their cell phones, their remote controls and their laptops.

Three ingredients: Predictable behavioral patterns, muscle memory and content hierarchy.

In contrast to many other automotive HMIs, 48 inches is some serious digital real estate. The benefit is readability. Suddenly it’s very easy to read street names and POIs in our map experience. We have enough surface to structure content in a meaningful way. I call it Context Sensitive Content Distribution, based on the principle “Less is More”. While in drive mode, we don’t display more content than other cars do. The 48-inch surface allows my design team to structure the content much better than in any other car. Now the content has room to unfold, font sizes can be big enough to read on bumpy roads, and all relevant safety information and warnings can be placed contextually. Users can change the font size just as they do on their phones. Our Dark and Bright Modes and the (auto/manual) adjustable brightness of all screens avoid "light smog". The beauty of displays is that we can turn them on when needed and off when not in use. We can also dim all input screens or put them into idle mode.

The entire industry is gravitating toward bigger screens, not only in the car, but on our cell phones and computer displays. The additional digital space makes it easier for users to consume content, read, watch and work. But there’s more to it. Anticipatory design is intended to reduce users’ cognitive load by making decisions on their behalf. Having too many choices often results in decision paralysis, a state in which a user cannot decide. In such a scenario, a predictive engine optimizes the hierarchy of content based on predictor variables, helping the user make a decision. By introducing a holistic global timeout we make the system predictable. A holistic global timeout allows the user to establish a Byton-specific mental model. For instance, lists, overlays, and other UI elements will auto-dismiss after a certain amount of time when not in use. The timing is currently 6 seconds and is subject to iteration.

After the global dismiss time, all open apps enter the decluttered ‘hero’ state. In the ‘hero state’ the content in ‘focus’ is either centered, augmented, or magnified. For instance, in the media application, the song currently playing will switch to hero state after the global timeout. In this case, the playlist will be hidden and the cover artwork or radio station logo will be centered. This declutters the screen, minimizes distraction and highlights the selected content. To exit hero state, the user needs to interact with the application/feature via voice, touch, hand gesture or gaze.
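As a rough illustration of the global timeout and the hero state, here is a minimal Python sketch of a media app that declutters itself after six seconds of inactivity and exits hero state on any interaction. Everything except the 6-second figure mentioned above is an assumption.

```python
import time

GLOBAL_TIMEOUT_S = 6.0  # global dismiss time from the text; subject to iteration


class MediaApp:
    """Minimal sketch of the hero-state behavior; names are hypothetical."""

    def __init__(self) -> None:
        self.hero_state = False
        self.last_interaction = time.monotonic()

    def on_interaction(self, source: str) -> None:
        """Any voice, touch, hand-gesture, or gaze input exits hero state."""
        self.last_interaction = time.monotonic()
        if self.hero_state:
            self.hero_state = False
            print(f"{source}: exit hero state, show playlist again")

    def tick(self) -> None:
        """Called periodically by the HMI loop."""
        idle = time.monotonic() - self.last_interaction
        if not self.hero_state and idle >= GLOBAL_TIMEOUT_S:
            self.hero_state = True
            print("global timeout: hide playlist, center the cover artwork")
```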

What truly fascinates me when driving our car is that you simply don’t notice the Byton Stage anymore. It becomes like digital paper. The content is easy to read.

PART TWO: Advanced Technology / Sensor Fusion

Using driver monitoring cameras as additional input for interaction models.

Dynamic Screen Brightness

Driver monitoring cameras are usually used for driver fatigue detection, but we are also able to detect and track the driver’s pupils. This opens up a whole new world of opportunity for safety-related user experience design. Pupil tracking can be used to understand where the driver is looking.
For instance, when the driver looks at the screen, the screen brightness can be increased. When the driver looks at the road, the screen brightness can be decreased.
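A minimal sketch of this idea, assuming a gaze classifier that reports whether the driver is looking at the road or at the screen; the brightness values and easing step are invented placeholders.

```python
# Hypothetical brightness levels; real values would come from tuning and testing.
BRIGHTNESS_ROAD = 0.3    # dim while the driver watches the road
BRIGHTNESS_SCREEN = 0.8  # brighten while the driver looks at the screen


def screen_brightness(gaze_target: str, current: float, step: float = 0.05) -> float:
    """Ease the brightness toward the target instead of jumping, to avoid flicker."""
    target = BRIGHTNESS_SCREEN if gaze_target == "screen" else BRIGHTNESS_ROAD
    if current < target:
        return min(current + step, target)
    return max(current - step, target)


if __name__ == "__main__":
    level = 0.3
    for gaze in ["road", "screen", "screen", "road"]:
        level = screen_brightness(gaze, level)
        print(f"gaze on {gaze:6s} -> brightness {level:.2f}")
```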


Dynamic Space Management

But we can do much more. At Byton, I divided the enormous 48-inch screen visually into three areas. Area one was used to display relevant driving information such as speed. Areas two and three were reserved for infotainment content like the map and media.
By using pupil tracking, I was able to expand each of the areas depending on where the driver is looking. As a designer, I could take full advantage of the available real estate to beautifully unfold content.
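Here is a minimal sketch of that space management, assuming gaze is reported as an x coordinate on the 48-inch surface; the pixel widths and area names are illustrative assumptions, not Byton’s actual layout values.

```python
from dataclasses import dataclass

SCREEN_WIDTH = 3840  # assumed pixel width of the 48-inch surface, for illustration only


@dataclass
class Area:
    name: str
    base_width: int    # width when not in focus
    focus_width: int   # width when the driver is looking at it


# Area one: driving information; areas two and three: infotainment (map, media).
AREAS = [
    Area("driving-info", base_width=1000, focus_width=1200),
    Area("map", base_width=1600, focus_width=2000),
    Area("media", base_width=1240, focus_width=1640),
]


def layout_for_gaze(gaze_x: float) -> dict[str, int]:
    """Expand the area under the driver's gaze and rescale the others to fit."""
    # Find which area the gaze x coordinate falls into, based on base widths.
    focused = None
    left_edge = 0
    for area in AREAS:
        if left_edge <= gaze_x < left_edge + area.base_width:
            focused = area
        left_edge += area.base_width

    widths = {a.name: (a.focus_width if a is focused else a.base_width) for a in AREAS}
    # Rescale so the three areas still fill the screen exactly.
    scale = SCREEN_WIDTH / sum(widths.values())
    return {name: round(w * scale) for name, w in widths.items()}


if __name__ == "__main__":
    print(layout_for_gaze(500))    # driver looks left: driving info expands
    print(layout_for_gaze(2000))   # driver looks at the map: map expands
```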


Declutter: Stage Management

When an “in-focus” area expands, its content unfolds and its menu appears. The user can now interact with the application that is in focus. This is particularly helpful in the case of a universal HMI, as the context is given and easy for the user to understand. Introducing a “Stage Management” concept helped to declutter the UI tremendously. But first, I needed to understand what a user considers to be a task! We conducted a few user studies, which helped us determine and better understand this. Now, when a task was fulfilled, I was able to present what I called the “Hero State”.


Example: A user looks at a certain area of the screen, the area in focus expands and the application within this area unfolds to display its menu, for instance. The user can scroll and select something. This is considered a task. I added a (global) timeout and faded the menu out once the task was fulfilled. The UI was decluttered as only the selected content was displayed - the “Hero State” was brought to life!
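The same flow, sketched as code: gaze expands an area and unfolds its menu, a selection counts as a task, and completing the task leads (after the global timeout) to the Hero State. All names are illustrative assumptions.

```python
class StageManager:
    """Rough outline of the gaze-driven stage management flow; not production code."""

    def __init__(self) -> None:
        self.focused_area: str | None = None
        self.menu_visible = False
        self.hero_state = False

    def on_gaze(self, area: str) -> None:
        """The area under the driver's gaze expands and unfolds its menu."""
        self.focused_area = area
        self.menu_visible = True
        self.hero_state = False
        print(f"{area}: expand and show menu")

    def on_select(self, item: str) -> None:
        """Scrolling and selecting an item is what the studies defined as a task."""
        print(f"{self.focused_area}: selected '{item}' (task fulfilled)")
        self._enter_hero_state()

    def _enter_hero_state(self) -> None:
        """After the global timeout the menu fades out and only the content stays."""
        self.menu_visible = False
        self.hero_state = True
        print(f"{self.focused_area}: menu faded out, Hero State active")


if __name__ == "__main__":
    stage = StageManager()
    stage.on_gaze("media")
    stage.on_select("Radio Paradise")
```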


Control Content

At Byton, I was tasked with finding a way to show a movie while driving without distracting the driver. I used the driver monitoring cameras and the pupil tracking capability to hide or stop a video being displayed.
The moment the driver’s pupils moved towards the third area of the screen where the movie was playing, I stopped the movie or hid the video. Technically, it worked great, but I can’t say with certainty whether this feature was safe for the driver or a desirable experience.
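A minimal sketch of that logic, assuming a gaze classifier that reports which area, if any, the driver’s pupils are aimed at; whether to pause or hide is left as a flag, since, as noted above, neither option was validated.

```python
VIDEO_AREA = "area-3"  # the screen area where the movie plays


class VideoGuard:
    """Sketch: stop or hide the movie when the driver's gaze enters the video area."""

    def __init__(self, hide_instead_of_pause: bool = False) -> None:
        self.hide_instead_of_pause = hide_instead_of_pause
        self.playing = True
        self.visible = True

    def on_gaze(self, area: str | None) -> None:
        if area == VIDEO_AREA:
            if self.hide_instead_of_pause:
                self.visible = False
                print("driver gaze on video area: hide the video")
            else:
                self.playing = False
                print("driver gaze on video area: pause the movie")
        elif area is None or area == "road":
            # Driver looks back at the road: resume for the passengers.
            self.playing, self.visible = True, True
```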


As I experimented with pupil tracking to explore in what ways a user can interact with the content being displayed, I started to fuse a variety of other inputs. More about this in the next chapter.


Fuse multiple inputs for intuitive interaction.

Fusion: Pupil tracking and hand gesture recognition.

My next step was to fuse pupil tracking with our hand gesture algorithm. My overall goal was to design an intuitive interaction with the lowest possible cognitive load but the highest desirability.
My approach was to use the exact same interaction model as for touch and apply it to the application within the area in focus. The driver, looking at a specific area of the screen, sees the area expand, sees the context menu unfold, and can interact conveniently by hand gesture. No new gesture needed to be learned. The UI responded immediately to provide easily recognizable feedback. The user was able to scroll a list and select a list item the same way as on our touch displays. A consistent design system helped to lower the cognitive load and made our system easy and fun to use.
To provide an even safer way to use our hand gesture controller, I added an airflow around the center console to better indicate where the hand gesture camera’s sweet spot is. The airflow started and stopped the moment the camera recognized a hand and its movement. The driver could now place their hand within the camera cone without looking at it, as the airflow was strong enough to feel when the hand was in the optimal position.
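A rough sketch of the fusion: gesture events reuse the touch interaction model and are routed to whichever application gaze has put in focus. The gesture names mirror common touch gestures and are assumptions, not the actual Byton gesture vocabulary.

```python
class GestureRouter:
    """Sketch: route hand gestures to the app in the gaze-focused area."""

    def __init__(self) -> None:
        self.focused_app: str | None = None

    def on_gaze(self, app: str) -> None:
        """Pupil tracking tells us which area, and therefore which app, is in focus."""
        self.focused_app = app
        print(f"{app}: area expands, context menu unfolds")

    def on_gesture(self, gesture: str) -> None:
        """Hand gestures reuse the touch model: swipe scrolls, tap selects."""
        if self.focused_app is None:
            return  # no focus, ignore the gesture to avoid accidental input
        actions = {"swipe_up": "scroll up", "swipe_down": "scroll down", "tap": "select item"}
        action = actions.get(gesture)
        if action:
            print(f"{self.focused_app}: {action}")


if __name__ == "__main__":
    router = GestureRouter()
    router.on_gaze("media")
    router.on_gesture("swipe_down")  # scrolls the playlist, just like on the touch display
    router.on_gesture("tap")         # selects the highlighted item
```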


Fuse multiple inputs to skip the wake word.

Fusion: Pupil tracking and voice commands.

Fusing pupil tracking with our hand gesture algorithm worked out great. Our tests and studies showed that users were easily able to understand the interaction model. It turned out that they loved to use it. My next step was to fuse the new gaze and hand gesture fusion with voice commands. My idea was to design an interaction model which allows the user to fulfill a task by switching between the input methods at any time.
For instance: the user begins to interact with our system using a hand gesture but is able to finish the task via voice command.


Our pupil tracking feature provided an understanding of which area the driver is looking at: either at the road or at one of the three areas of the screen. With that, we had enough context to skip the wake word for voice commands.
In more detail: the application within the “in-focus” area unfolds its menu. The user can then continue to interact with the system via either hand gesture or voice. Voice commands such as NEXT, DOWN, UP, PLAY, PAUSE or even CLOSE could be processed.
In this case, my interaction model worked without any wake word because the stage manager was listening for the possible voice commands, and able to respond to hand gestures, in the context of the active application in the in-focus area.
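A minimal sketch of skipping the wake word: the stage manager listens only for a small, closed command vocabulary, and only while gaze provides an in-focus application as context. The command names follow the examples above; everything else is an assumption.

```python
# Small, closed vocabulary the stage manager listens for while an app is in focus.
COMMANDS = {"NEXT", "DOWN", "UP", "PLAY", "PAUSE", "CLOSE"}


class VoiceGate:
    """Sketch: accept wake-word-free commands only when gaze gives us context."""

    def __init__(self) -> None:
        self.focused_app: str | None = None

    def on_gaze(self, area: str | None) -> None:
        # Gaze on the road clears the context; gaze on a screen area sets it.
        self.focused_app = None if area in (None, "road") else area

    def on_utterance(self, text: str) -> None:
        word = text.strip().upper()
        if self.focused_app and word in COMMANDS:
            print(f"{self.focused_app}: execute {word} (no wake word needed)")
        else:
            # Without gaze context, or outside the vocabulary, ignore the utterance.
            print("ignored: no in-focus app or unknown command")


if __name__ == "__main__":
    gate = VoiceGate()
    gate.on_utterance("next")   # ignored, driver is looking at the road
    gate.on_gaze("media")
    gate.on_utterance("next")   # executed against the media app
```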


Fuse multiple inputs for safer operation.

Fusion: Pupil tracking and steering

One of my biggest safety concerns with our 48-inch shared experience display was the enormous blind spot at the A-pillar on the passenger side. This became a particularly difficult safety issue for right turns.
My approach was to use the ADAS camera live streams to create a “see-through” feature. Simply speaking, I was trying to make the right side of the screen transparent. I combined a few already implemented fusion capabilities and added more contextual inputs such as steering angle and speed. My goal was to cover a few edge cases we had identified: sharp right turns at low speed, either forward parking, right turns in tight streets, or entering a driveway.
A low speed threshold and a steering wheel angle, combined with gaze tracking, seemed to be the right approach. At low speed, with the right turn signal on and a certain steering angle to the right, combined with the driver’s pupils detected toward the right side of the street, we displayed the forward-facing ADAS camera stream in the third area of the 48-inch shared experience display.
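The trigger condition could be sketched as a simple fusion rule over vehicle signals and gaze. The thresholds below are invented placeholders, not the values used in the simulator.

```python
from dataclasses import dataclass

# Placeholder thresholds for illustration; real values would come from testing.
MAX_SPEED_KMH = 20.0           # only at low speed
MIN_STEERING_ANGLE_DEG = 45.0  # noticeable steering input to the right


@dataclass
class VehicleState:
    speed_kmh: float
    steering_angle_deg: float   # positive = turning right (assumed convention)
    right_turn_signal: bool
    gaze_on_right_side: bool    # pupils detected toward the right side of the street


def show_see_through(state: VehicleState) -> bool:
    """Return True when the forward ADAS camera stream should fill area three."""
    return (
        state.speed_kmh <= MAX_SPEED_KMH
        and state.right_turn_signal
        and state.steering_angle_deg >= MIN_STEERING_ANGLE_DEG
        and state.gaze_on_right_side
    )


if __name__ == "__main__":
    turning = VehicleState(8.0, 70.0, True, True)
    cruising = VehicleState(60.0, 2.0, False, False)
    print(show_see_through(turning))   # True: blend in the camera stream
    print(show_see_through(cruising))  # False: keep the normal layout
```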

The video stream was aligned with the user’s pupils to ensure a realistic "transparency" experience. This feature was implemented in my driving simulation and interactive seating buck. It became the heart feature of Byton. Unfortunately, it never made it into the production vehicle.


Various screen sizes to optimize the in-vehicle experience.

The best and safest experience for manual driving as well as for non-driving or L4/L5 use cases.

Bigger screens are great for many reasons, especially for entertainment use cases. But how can we design the best experience, combined with safety, both while manually operating the vehicle and while watching a movie in park or while charging?
My approach was to change the size of the screen by moving it into or out of the IP. When the vehicle is in the Drive gear, the display surface is extended just enough to provide the optimal real estate to display all relevant driver-related information in a safe fashion, without blocking the driver’s view or causing distraction. When the vehicle is in Park, the display can extend further to increase the surface size and provide an optimal entertainment experience.
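A tiny sketch of the gear-dependent extension, with hypothetical extension percentages:

```python
# Hypothetical extension levels for the retractable display, in percent of full travel.
EXTENSION_BY_GEAR = {
    "D": 60,   # Drive: just enough surface for driver-relevant information
    "R": 60,   # Reverse: same reduced surface as Drive
    "P": 100,  # Park: fully extended for the entertainment experience
}


def display_extension(gear: str) -> int:
    """Return how far the display should extend out of the IP for a given gear."""
    return EXTENSION_BY_GEAR.get(gear, 60)  # default to the safe, reduced size


if __name__ == "__main__":
    for gear in ("D", "P"):
        print(f"gear {gear}: extend display to {display_extension(gear)}%")
```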
Things can be simple - sometimes.


-Dré Nitze-Nelson | 安德烈 尼采-纳尔逊