Over the last decade, there has been a broad range of research on distributed user interfaces (DUIs). We provide an overview of existing DUI approaches and classify the different solutions based on the granularity of the distributed UI components, location constraints as well as their support for the distribution of state. We propose an approach for user-defined cross-device interaction where users can author their customised user interfaces based on a hypermedia metamodel and the concept of active components. Furthermore, we discuss the configuration and sharing of customised distributed user interfaces by end users, where the focus is on an authoring rather than a programming approach.
3. How many connected devices do people use?
[Bar chart: percentage of users (0–30%) by number of devices (0 to 5 or more) for Belgium, Switzerland, USA, Spain and Japan; a callout notes that 65% of users in Switzerland use two or more devices. Source: The Connected Consumer Survey 2014/2015, Google Inc.]
5. Existing Classifications
"The 4C Reference Model for Distributed User Interfaces"
by Demeure et al.
• computation, configuration, communication and coordination
"Distributed User Interfaces: State of the Art"
by Niklas Elmqvist
• input, output, platform, space and time
6. [Classification chart: DUI systems plotted by location constraint (table/camera, room, network connection to the server, anywhere) and granularity of distribution (UI + data vs. UI + data + UI elements): HuddleLamp, MultiSpace, ReticularSpaces, Panelrama, Conductor and Connichiwa.]
7. [Full classification chart with the same axes (location constraint: table/camera, room, network connection to the server, anywhere; granularity of distribution: UI + data vs. UI + data + UI elements), covering HuddleLamp, Airlift, MultiSpace, ReticularSpaces, ARIS, GroupTogether, iLand, iRoom, Panelrama, Conductor, IMPROMPTU, THAW, Deep Shot, Weave, Connichiwa, XDStudio, WebSplitter, Melchior et al. (2009), CAMELEON-RT, Robertson et al. (1996), Frosini et al. (2013), MultiMasher and Pick-and-Drop; a region marks where support for end users is targeted.]
8. How to allow end users to define customised cross-device interactions?
• What will end users be able to modify?
• How much control will end users have in terms of the granularity of the UI components to be distributed?
• Will end users be limited by a specific location, space or office setting?
• Will end users be able to share their configuration of customised DUIs?
• Can end users reuse parts of other configurations?
• …
11. Proposed Approach
[Diagram: a Swipe Area on the tablet runtime environment is linked to gesture active components (Swipe AC, Double Swipe AC, Triple Swipe AC, Direction AC, grouped under a Gesture AC); the Swipe AC triggers a Data Transfer AC on the tablet, which triggers the Data Transfer AC in the TV runtime environment, which in turn triggers the Play AC and Sound AC.]
[28] Signer and Norrie: As We May Link: A General Metamodel for Hypermedia Systems (2007)
12. Authoring Rather Than Programming
[Authoring tool mockup: a canvas where a Swipe Gesture (Device: Tablet) and an Area (Device: Tablet) are linked to a DataTransfer (SDevice: Tablet, TDevice: TV), which is in turn linked to Play (Device: TV) and Sound (Device: TV, Volume: 80%); a Components palette lists DataTransfer, Play, Sound, Swipe, Area, Double Swipe, Triple Swipe, Direction, …]
13. Conclusion
• Classification of DUI systems
• User-defined cross-device interactions
• Linking UI components and application logic
• RSL hypermedia metamodel
• Arbitrary level of granularity
• Sharing of user-defined interactions
15. Reference
Sanctorum A. and Signer B.: Towards User-defined Cross-Device Interaction. In the Workshop on Distributed User Interfaces, Lugano, Switzerland (2016).
Editor's Notes
Hello everyone, I’m Audrey Sanctorum from the Vrije Universiteit Brussel and I will present our paper “Towards User-defined Cross-Device Interaction”.
As many of us know, electronic devices have grown in popularity over the last few years, and nowadays users own a lot of different electronic devices.
According to a survey by Google, many people use more than one device in their daily activities.
We can see for example in Japan that 25% of users use 2 devices.
In Switzerland 65% of the people use 2 or more devices in their daily lives.
https://www.consumerbarometer.com/en/graph-builder/?question=M3&filter=country:japan
On this slide we see some of the research that has been done over the last few years to simplify the use of multiple devices.
Of course, this is only a small part of the research that exists. All of these systems allow some data, like figures and documents, to be distributed, or allow a whole UI to be distributed, as we can see here in ReticularSpaces. Some systems even allow parts of the UI to be distributed across multiple devices, as we see here in Panelrama and Connichiwa.
To achieve this distribution, certain systems have a restricted interaction space, like HuddleLamp, which allows distribution only on a table surface recognised by a mounted camera. Others are limited to a room as their interaction space, such as MultiSpace, and others again are limited to a specific network, where all devices need to be connected to a central server, as in the Conductor and Panelrama systems.
Then finally, there are systems like Connichiwa that do not rely on a remote server (they start a web server on demand on one of the joined devices instead) and allow distribution of user interfaces anywhere the users want.
Since there are many different existing systems, we need some classification in order to get a good overview of them. Here you can see two previously proposed classifications based on certain dimensions: the first one is by Demeure et al. and the second one by Elmqvist.
Now, since we were more interested in other dimensions, we proposed another classification based on the location and granularity of the distribution, and whether state transfer is possible or not.
(Quoting Elmqvist's space dimension: "Space (S). The interface is restricted to the same physical (and geographic) space, or can be distributed geographically (i.e., co-located or remote interactive spaces [2]).")
So… here we see our three dimensions: the location, the granularity and, encoded by colour, our third dimension, namely state transfer.
Let's have a look at the first dimension, the location: as we can see here, we go from very local solutions, like on a table, to solutions that work anywhere. We have already seen HuddleLamp, which fits in the very local category, then MultiSpace, which can be used in a room, the Conductor and Panelrama systems, which work over a network, and finally ReticularSpaces and Connichiwa, which can be used anywhere.
The second dimension is the granularity of the distribution. Here we differentiate between systems that only allow data such as images (like HuddleLamp) and/or a complete UI (like ReticularSpaces) to be transferred, and systems that provide a finer granularity and also allow parts of UIs to be distributed, as we can clearly see in Panelrama and Connichiwa, for example.
And finally, we have our third dimension, which indicates whether the system supports state transfer, meaning that the state of the UI or data element will also be distributed. Such systems can provide synchronised views; here, for example, we see that the highlighting of the distributed documents is synchronised.
(5 minutes up to here)
(Of course, like I said, there are many other systems)
*show everything*
This is the full picture, but I will not talk about that in detail, you can find the details in the paper.
Our goal is a system with no restriction in terms of location and a very fine granularity, so we aim to be somewhere there.
As you can see, we are not the first ones in that area; there are already some systems there. However, they don't support end users: these systems focus on giving developers the possibility to easily create DUIs. We want to enable end users to easily create DUIs.
Now the question is… how can we enable end users to define their own customised interactions across different devices?
And thus allow end users to create, modify and reconfigure DUIs.
Therefore we need to ask ourselves some more questions, like:
What will end users be able to modify?
How much control will we give to end users in terms of granularity of the distribution?
Will they be able to share their DUI configurations?
Can they reuse parts of other configurations?
Will they be limited by a specific setting/location?
After reflecting on these questions, and based on our background research, we came up with the following architecture. Developers still play a major role, since not everything can be done by end users: developers create building blocks that allow the distribution of UIs and UI components, and end users can then use and reconfigure these building blocks as they want in order to build their own DUIs. These building blocks are stored in the Developer Registry in the form of active components. In order to also offer functionality from third-party applications, we propose a plugin mechanism: developers could create plugins that give access to, for example, functionality from PowerPoint. End users can then link these components and plugins together to configure their own customised cross-device interactions. These configurations are stored in the Configuration Pool, and each user also has their own user profile, which is stored in the user profile database.
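To make this architecture more concrete, here is a minimal sketch of its data model. All class, field and function names are illustrative assumptions, not the authors' implementation; only the concepts (active components, Developer Registry, Configuration Pool) come from the talk.

```python
from dataclasses import dataclass, field

@dataclass
class ActiveComponent:
    """A reusable building block provided by developers (e.g. a gesture recogniser)."""
    name: str
    device: str = ""                      # device the component is bound to, if any
    params: dict = field(default_factory=dict)

@dataclass
class Configuration:
    """An end-user-defined DUI: a set of components plus the links between them."""
    owner: str
    components: list
    links: list                           # (source component name, target component name)

developer_registry = {}                   # built and filled by developers
configuration_pool = []                   # configurations shared by end users

def register(component):
    developer_registry[component.name] = component

def share(config):
    configuration_pool.append(config)

# Developers register building blocks ...
register(ActiveComponent("Swipe"))
register(ActiveComponent("DataTransfer"))

# ... and an end user (Sophia) links them into her own configuration and shares it.
sophia = Configuration(
    owner="Sophia",
    components=[developer_registry["Swipe"], developer_registry["DataTransfer"]],
    links=[("Swipe", "DataTransfer")],
)
share(sophia)
```

Sharing a configuration to the pool is what would later let other end users reuse parts of it.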
Since we want end users to link different components to enable new cross-device interactions, we use RSL, a hypermedia metamodel, to manage all the links.
Now what I said might seem quite complex, so in order to understand this concept of linking of different components let’s look at some example scenarios where this could be very useful.
In our first scenario, an end user (let's say Sophia) wants to transfer a video playing on her tablet to her TV. She not only wants the video to continue playing on the TV, but also wants the sound of the TV to increase. To enable such an interaction, Sophia could take a building block or component made by the developers that allows the transfer of a video, and another one that increases the sound, and combine them.
Furthermore she wants this interaction to be triggered by a swipe gesture on her tablet. For this, our idea is that Sophia could define a swipe area on her tablet, and combine it with a component that recognises the swipe gesture.
For our second scenario, an end user (let's say John) wants to transfer a picture from his smartphone to his computer. Since he often wants to modify his pictures after an image transfer, he wants to build a system that automatically opens his picture in PhotoShop after the transfer. John therefore wants to define two different interactions: one where the image is simply transferred via a swipe gesture, and one where the image is transferred and then opened in PhotoShop via a double swipe. To achieve this, John could simply combine the swipe recogniser component with a component that allows image transfer, and combine a double swipe recogniser building block with one that allows image transfer and one that opens a file in a specific application, in his case PhotoShop.
So… To allow these kind of interactions, we propose the following approach.
As we said earlier, we want to have an area that will be linked to some components.
Here you can see how the different components would be linked to allow the first scenario for Sophia and her video transfer.
The swipe area is linked with a swipe recogniser component or building block. This one triggers the data transfer component in the tablet runtime environment, which in turn triggers the data transfer component in the TV runtime environment. This last one triggers the Sound and Play components, allowing the video to continue playing on the TV while the sound of the TV is increased.
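The trigger chain in Sophia's scenario can be sketched as follows. The callback-based wiring and all component names are assumptions for illustration; they are not the system's actual API.

```python
class Component:
    """An active component that, when triggered, runs its action and then
    triggers all components linked to it."""

    def __init__(self, name, action=None):
        self.name = name
        # Default action: record that this component fired.
        self.action = action or (lambda log: log.append(self.name))
        self.targets = []                 # components triggered after this one

    def link(self, target):
        self.targets.append(target)
        return target                     # return target to allow chaining

    def trigger(self, log):
        self.action(log)
        for target in self.targets:
            target.trigger(log)

log = []
swipe = Component("SwipeRecogniser")
transfer_tablet = Component("DataTransfer@Tablet")
transfer_tv = Component("DataTransfer@TV")
play = Component("Play@TV")
sound = Component("Sound@TV")

# Wire the chain: swipe -> tablet transfer -> TV transfer -> {play, sound}
swipe.link(transfer_tablet).link(transfer_tv)
transfer_tv.link(play)
transfer_tv.link(sound)

swipe.trigger(log)   # a swipe in the tablet's swipe area starts the whole chain
```

After the trigger, `log` holds all five component names in firing order, mirroring the chain described above.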
Now, as you can see here, there are much more possibilities…
The double swipe can, for example, also be linked to the swipe area, which makes the second scenario possible. John would then link other existing components to realise the rest of the scenario.
In order to allow this linking of components, we have a hypermedia model that manages the links. This also allows the grouping of different components, as we can see here with the Gesture component. Furthermore, different components can be linked together so that different actions happen at the same time, such as the Sound and Play components. Different gesture components can also be combined to allow more specific interactions, such as a double swipe left, where the double swipe and direction components are linked.
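The combination of gesture components into a more specific one, such as a double swipe left built from a double swipe and a direction component, can be sketched as below. The event dictionaries and recogniser functions are hypothetical, chosen only to illustrate the grouping idea.

```python
def double_swipe(event):
    """Assumed recogniser: matches a double-swipe gesture event."""
    return event.get("gesture") == "double-swipe"

def direction_left(event):
    """Assumed recogniser: matches a leftward direction."""
    return event.get("direction") == "left"

def combine(*recognisers):
    """Group components: the composite only fires if every part matches."""
    return lambda event: all(r(event) for r in recognisers)

# A more specific interaction built from two linked gesture components.
double_swipe_left = combine(double_swipe, direction_left)
```

A double swipe to the left matches the composite, while a double swipe to the right does not, so end users get finer-grained triggers without writing any new recogniser.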
Of course, we don’t want our end users to need any programming knowledge to define such interactions, which is why we chose an authoring rather than a programming approach.
Our idea is that the linking of the different components would be done via an authoring tool in the same style as Yahoo Pipes, where users would simply have to drag and drop different components made by the developers.
So, to conclude, we presented a new classification of DUI systems and proposed a new approach for user-defined cross-device interaction based on a hypermedia metamodel, where UI components can be linked to different application logic at any level of granularity via active components. Finally, our architecture also enables the sharing of user-defined interactions.