IEEE Augmented Reality Learning Experience Model (ARLEM)
1. A Standard for Augmented Reality Learning Experience Models (AR-LEM)
Fridolin Wild 1), Christine Perey 2)
1) The Open University, UK
2) Perey Research and Consulting, CH
2. Call for Potentially Essential Patents
If anyone in this meeting is personally aware of the holder of any patent claims that are potentially essential to implementation of the proposed standard(s) under consideration by this group and that are not already the subject of an Accepted Letter of Assurance:
• Either speak up now, or
• Provide the chair of this group with the identity of the holder(s) of any and all such claims as soon as possible, or
• Cause an LOA to be submitted
3. Agenda
• Welcome by Avron Barr (LTSC)
• Welcome by the Chairs of P1589
• Purpose and goals of ARLEM
• Baseline spec
• Use Cases
5. IEEE Standards Working Groups
Identified market problem and solution:
– Who will be selling what to whom? What products will be "certified"?
– Solving an identified or anticipated market problem: fractured marketplace, excessive product integration costs, product incompatibility, vendor lock-in
– Vendors and their customers should see the need for standards, as evidenced by their participation and sponsorship.
– The specification may be only a part of the solution. Stewardship might also involve promotion, conformance testing, best practices guides, maintenance, and continued evolution of the spec.
Participation and governance in the IEEE:
– All IEEE LTSC proceedings are open to observers.
– The working group gets to decide about membership (individual vs. entity), fees, voting, and its governance framework generally, within IEEE guidelines.
– Shared IP: http://open-stand.org
6. Learning Technology Standards Committee
Current projects:
Study groups (pre-standard)
– Actionable Data Book. Exploring the future of educational publication – textbooks that compute.
– Project-based Learning Opportunities. Exploring the possibility of describing internships and other on-the-job learning opportunities and building a brokerage system to match prospects with jobs.
– Competencies. Defining a universal language for describing competency frameworks, which will allow these frameworks to be compatible and interoperable across communities of practice.
Standards working groups
– Resource Aggregation Models for Learning, Education and Training. Developing ontology-based solutions for semantic interoperability across the various e-learning content packaging schemes.
– Augmented Reality. Developing a standard model for defining AR-based learning experiences.
7. You can't make a standard!
Stages in the life of a standard (Robby Robson, 2005):
• Pre-standards activities: principles, requirements, early specs, prototypes
• Standardization: compromises, champions, prototypes
• Early adoption: publication, first products, PR
• Rude awakening: user feedback, revisions
• Real adoption: stabilization, test suites, products, conformance, compliance
Only the market can make a standard.
9. Augmented Reality in 2015
• 100,000+ developers with access, skills, and an expressed desire to author AR experiences
• Nearly 1B people with at least one AR-ready device (sensors and output/display support)
[Diagram: digital assets and the physical world connected through specific AR use cases, spanning AR experience authoring and AR experience delivery]
10. Many Companies Are Producing AR Products and Services, but…
Proprietary technology silos.
11. Augmented Reality Developers and their Experiences
• The top <10% of developers are responsible for >50% of AR experiences
• 80% of developers have only a few AR experiences
• The next 10% are responsible for 20%
12. Mobile AR-Enabled Devices vs. Users
• 1B smartphones with all necessary sensors and graphics acceleration hardware
• Only <10% of their owners are users of mobile AR
14. What is Open and Interoperable AR?
• A complete end-to-end system in which modular components can be supplied by multiple vendors and still deliver the same workflow and experience quality
• A set of shared values… a "school of thought" about Augmented Reality
15. From silos to open systems
16. Open and Interoperable AR
• Millions of developers with access, skills, and an expressed desire to author AR experiences
• Billions of people with at least one AR-ready device (sensors and output/display support)
[Diagram: any digital assets and any AR use cases; AR experience authoring tools and workflows deliver the AR experience on any form factor, using any standards-compliant software client]
17. Open and Interoperable AR
Permits consistent and flexible content and technology integration and management.
Interoperability simplifies the developer's AR experience:
• Authoring
• Publishing
• Integration
Interoperability improves the user's AR experience:
• Discovery
• Sharing
• Consuming
18. Where Would You Rather Start?
Existing standards and modern tools, or raw materials and primitive tools?
19. AR Community
• A grassroots community of people since 2009
• Seeks open and interoperable AR content and experiences
• Brings together standards development organizations and developers
• Operates a Web portal and seven archived mailing lists
• Conducts virtual and in-person meetings
20. Notable Achievements to Date
Initiatives:
• Cross-SDO (OGC, Khronos Group, ISO, Web3D) collaboration to address 3D compression and transmission
• Development of AR browser interoperability
Resources:
• Tables of relevant standards and status of active SDOs
• Calendar of meetings and events
• Glossary of AR terminology
• Mixed and Augmented Reality Reference Model (ISO)
21. Relevant Industry Groups and Standards Organizations
[Diagram: logos of the relevant industry groups, consortia, and national standards organizations]
22. Most Active Standards Groups
• Mixed and Augmented Reality Reference Model (MAR RM)
• AR Application Format (ARAF)
• WebGL
• glTF
• OpenVX
• OpenKCam
• StreamInput
• 3D Medical Display
• Streaming Media Quality
• Streaming to Mobile
• xAPI
• Simulation and Virtual Reality
• ARLEM
• ARML 2.0
• IndoorGML
• OWS Context
• GeoPackage
• Moving Features
• Points of Interest
24. The Cost of Integration
Studies show:
– 30% of the time in software development projects is spent on interface design and implementation (Schwinn & Winter, 2005)
– 35% to 60% of the IT budget is spent on development and maintenance of interfaces (Ruh et al., 2001)
– Heterogeneity and integration demand are rising (Klesse et al., 2005)
25. Status Quo in Learning Technology
• Plethora of (standard) software: C4LPT lists over 2,000 tools
• Existing learning object / activity standards lack reality support
• Multi-device orchestration (think wearables!)
=> Enterprises and institutions face interoperability problems
26. Interoperability
"…this means that independently developed software components can exchange information so that they can be used together." (Duval, 2004)
"…is the ability to transfer and use information in a uniform and efficient manner across multiple organisations and information technology systems." (NOIE, 2003)
"…is a property that emerges when distinctive information systems (subsystems) cooperatively exchange data in such a way that they facilitate the successful accomplishment of an overarching task." (Wild et al., 2007) http://ceur-ws.org/Vol-309/paper01.pdf
34. Use Cases
From the AR Community Meeting @ MIT Media Lab, Cambridge, MA, 24–25 March 2015
35. Use Cases (1)
'Wet rehearsal'. Simulation in the real context: rehearsal at the actual workplace and with the actual objects, training newcomers before they can do the real thing. Recording standard operating procedures (e.g. for an audit) is one example.
Assessment. Experience recording helps collect evidence of task performance 'by the book' (and can be replayed to others). Such recordings, made at the push of a button or at hot spots, can later be brought up again to support career development: the evidence helps assess where training is required and demonstrates whether staff are able to do the job within the required specs. Service technicians are a typical case.
Quality inspection. Assessment comes in many disguises, quality inspection for high-precision jobs being one of them. In manufacturing, for example, product assurance is key.
Experience recording. Active AR in authoring mode is used in a 'show and tell' way to extract key steps from existing documentation. User-generated content can be used to convert existing technical documentation into augmented documentation.
http://bit.ly/arlemusecases
36. Use Cases (2)
Health & wellness learning. Imaging, wearable sensors, and biometrics enable the enlightened patient to better control wellbeing, using direct biofeedback to understand and modify their own behaviour. For example, visualizing X-ray or MRI data in situ on the body, using an interactive 'mirror', helps people understand conditions better. Physical therapy for rehabilitation, patient self-help, a yoga trainer, etc. all work on the same principle: understand better what is happening inside of you and use it to your advantage.
Maintenance. Not only mechanics are able to do repairs and maintenance operations. Many products today are not repaired but disposed of when faults occur, as the cost of professional labour (and travel of engineers or postage) is often higher than producing a new unit. Changing the motor of a washing machine, replacing a chain, gearbox, or brakes on a bike, changing the electronic window levers on a car, supporting installation of a complex wire harness: the range of AR-supported DIY opportunities is nearly endless.
http://bit.ly/arlemusecases
37. Use Cases (3)
Remote tutoring. Not only professionals, but also home users with a certain level of manual dexterity would benefit greatly from live tutoring and guidance, receiving remote support in situ and on the job. Stuck with changing the motor in your washing machine? Call the service agency on your smart glasses to receive live hands-on guidance.
Resumé service. Human resources departments would love to visualize the experience of candidates, enhancing the resumé. Checking the compliance of workers is one example (compliance assessment).
Tangible learning objects. Using 3D printing and Internet-of-Things hardware, we can breathe new life into objects, using their tangible features as interfaces to software functionality and logic. A relay box simulator is an example of this.
Work shadowing. For complex tasks, it is often best to learn from the best and see things 'through the eyes of the master'. AR may well be the game-changer, as passive-mode AR provides cost-efficient ways to watch a master in action – at scale.
…
http://bit.ly/arlemusecases
40. The Activity Model
[Diagram: the activity model, illustrated with the example instruction "find the spray gun nozzle size 13"]
• Messaging in the real-time presence channel and tracking to xAPI
• onEnter/onExit chaining of actions and other activations/deactivations
• Styling (cascading) of viewports and UI elements (e.g. a smart player, a search widget)
• Constraint modelling: specify validation conditions and model workflow branching
http://bit.ly/arlem-input
41. The Workplace Model
• The 'tangibles': specific persons, places, things
• The 'configurables': devices (styling), apps + widgets
• The 'triggers': markers trigger overlays; overlays trigger human action
• Overlay 'primitives': enable re-use of e.g. graphical overlays
http://bit.ly/arlem-input
42. Create a new activity
Create a new XML file; best name it something like 'activity-myname.xml'.
Add the activity element:
<activity>
</activity>
Add the following attributes to the <activity …> element:
– id="myshortname"
(no spaces; this will serve the indexing so that we can find activities later)
– name="Assembly of cabinet"
(human-readable description of the activity)
– language="english"
– workplace="http://crunch.kmi.open.ac.uk/people/~jmartin/data/workplace-AIDIMA.xml"
(link to the workplace model – you can use one workplace for all activities, or different workplaces for different activities)
– start="start"
(id of the action to start with)
43. This is what the file looks like now
<activity
  id="assembly"
  name="Assembly of cabinet"
  language="english"
  workplace="http://crunch.kmi.open.ac.uk/people/~jmartin/data/workplace-AIDIMA.xml"
  start="start"
>
</activity>
44. Add your first action step ('start')
We just defined that the action step to begin with has the id 'start', so we create this action step.
Add, in between the opening and closing <activity> tags:
<action
  id="start"
  viewport="actions"
  type="actions"
>
</action>
45. The dialogue box of the action
We want a human-readable instruction to be visible (not just an action step that displays overlays, 3D models, or videos), so we add the following in between the two <action> tags:
<instruction><![CDATA[
  <h1>Assembly of a simple cabinet</h1>
  <p>Point to the cabinet to start…</p>
]]></instruction>
46. Entry, Exit, Trigger
To define the flow of the actions, we have to define what 'triggers' the state change.
Moreover, we want to define what shall happen when the action is launched ('entered') and when the trigger moves on to the next action (or whatever the 'exit' statements define).
47. Example enter/exit/trigger
<!-- on enter: nothing (for now) -->
<enter removeSelf="false">
</enter>
<!-- on exit: launch step2 and remove the dialogue box 'start' -->
<exit>
  <activate type="actions" viewport="actions" id="step2"/>
  <deactivate type="actions" viewport="actions" id="start"/>
</exit>
<!-- this action step shall be exited by 'clicking' on the dialogue box -->
<triggers>
  <trigger type="click" viewport="actions" id="start"/>
</triggers>
48. Your script now:
<activity id="assembly" name="Assembly of cabinet" language="english"
  workplace="http://crunch.kmi.open.ac.uk/people/~jmartin/data/workplace-AIDIMA.xml"
  start="start">
  <action id="start" viewport="actions" type="actions">
    <enter removeSelf="false">
    </enter>
    <exit>
      <activate type="actions" viewport="actions" id="step2"/>
      <deactivate type="actions" viewport="actions" id="start"/>
    </exit>
    <triggers>
      <trigger type="click" viewport="actions" id="start"/>
    </triggers>
    <instruction><![CDATA[<h1>Assembly of a simple cabinet</h1><p>Point to the cabinet to start ... </p>]]></instruction>
  </action>
  <action id="step2" viewport="actions" type="actions">
    <enter></enter>
    <exit removeSelf="true"></exit>
    <triggers>
      <trigger type="click" viewport="actions" id="step2"/>
    </triggers>
    <instruction><![CDATA[<h1>step2</h1><p>do this and that.</p>]]></instruction>
  </action>
</activity>
49. Working with 'tangibles'
• Utilise the computer vision engine to detect things/places/people (= tangibles)
• Define tangibles in the workplace model
• Then activate (or deactivate) what shall be visible and relevant in each action step
50. In the workplace model
We open the workplace model and define a new thing (under resources/tangibles/things):
<thing id="board1" name="Cabinet"
  urn="/tellme/object/cabinet1" detectable="001">
  <pois>
    <poi id="leftside" x-offset="-0.5" y-offset="0" z-offset="0.1"/>
    <poi id="default" x-offset="0" y-offset="0" z-offset="0"/>
  </pois>
</thing>
• The id is what we will reference
• The detectable specifies which marker (or sensor state) will be bound to the thing
• poi = point of interest: specifies a location relative to the centre of the marker (x = y = z = 0: centre)
51. Markers and pre-trained markers
• A marker must be defined in the workplace model
• It shall be possible to provide pre-trained markers (and their PDF file to print); these markers shall be named 001 to 050
• Markers shall be specified via their id in the computer vision engine (under resources/triggers/detectables):
<detectable id="001" sensor="engine" type="marker"/>
52. Activates and deactivates
Now we have defined a thing called 'board1' and we have tied it to the marker 001.
We can start referring to it from the activity script: we can, e.g., activate pictogram overlays for the verbs of handling and motion:
<activate
  tangible="board1"
  predicate="point"
  poi="leftside"
  option="down"
/>
53. Your script
<activity id="assembly" name="Assembly of cabinet" language="english"
  workplace="http://crunch.kmi.open.ac.uk/people/~jmartin/data/workplace-AIDIMA.xml"
  start="start">
  <action id="start" viewport="actions" type="actions">
    <enter removeSelf="false">
      <!-- display an arrow pointing downwards at the point of interest 'leftside' -->
      <activate tangible="board1" predicate="point" poi="leftside" option="down"/>
      <!-- display a label 'touchme' at the centre of the marker -->
      <activate tangible="board1" predicate="addlabel" poi="default" option="touchme"/>
    </enter>
    <exit>
      <!-- remove both visual overlays when this action step is exited -->
      <deactivate tangible="board1" predicate="point" poi="leftside"/>
      <deactivate tangible="board1" predicate="addlabel" poi="default"/>
      <activate type="actions" viewport="actions" id="step2"/>
      <deactivate type="actions" viewport="actions" id="start"/>
    </exit>
    <triggers>
      <trigger type="click" viewport="actions" id="start"/>
    </triggers>
    <instruction><![CDATA[<h1>Assembly of a simple cabinet</h1><p>Point to the cabinet to start ... </p>]]></instruction>
  </action>
  <action id="step2" viewport="actions" type="actions">
    <enter></enter>
    <exit removeSelf="true"></exit>
    <triggers>
      <trigger type="click" viewport="actions" id="step2"/>
    </triggers>
    <instruction><![CDATA[<h1>step2</h1><p>do this and that.</p>]]></instruction>
  </action>
</activity>
54. 3D overlays, image overlays, videos
Besides the verb primitives and the label, there shall be 'generics' that can be used to embed video, images, or animations:
<activate tangible="board1"
  predicate="addanimation"
  poi="leftside"
  option="1"/>
Animations shall either be embedded in the app or be downloaded from the web (URL).
Animations can have 'states', addressed via the 'option' attribute (option=0: invisible; option=1: animation step 1; option=2: animation step 2; …).
<activate tangible="board1" predicate="addvideo" poi="leftside"
  option="http://myurl.org/myvideo.mp4"/>
<activate tangible="board1" predicate="addimage" poi="leftside"
  option="http://myurl.org/myvideo.png"/>
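To illustrate the 'states' mechanism, a minimal sketch (not from the spec draft itself): a follow-up action step, here hypothetically named 'step3', that advances the same animation on enter, following the enter/exit pattern used above. That state 0 is set via a second activate is an assumption based on the option semantics described.
<action id="step3" viewport="actions" type="actions">
  <enter>
    <!-- advance the animation at 'leftside' to state 2 (option="2") -->
    <activate tangible="board1" predicate="addanimation" poi="leftside" option="2"/>
  </enter>
  <exit>
    <!-- assumption: option="0" renders the animation invisible again -->
    <activate tangible="board1" predicate="addanimation" poi="leftside" option="0"/>
  </exit>
</action>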
55. Verb Primitives
All verbs need the 'id' of the tangible; some of them need 'POIs' as input; a few have 'options'. A short usage sketch follows this list.
• 'point': poi + option = up, upperleft, left, lowerleft, down, lowerright, right, upperright
• 'assemble', 'disassemble'
• 'close'
• 'cut': poi
• 'drill': poi
• 'inspect': poi
• 'lift'
• 'lower'
• 'lubricate'
• 'measure': poi
• 'open'
• 'pack'
• 'paint'
• 'plug'
• 'rotate-cw', 'rotate-ccw': poi
• 'screw': poi
• 'unfasten': poi
• 'unpack'
• 'unplug'
• 'unscrew': poi
• 'forbid'
• 'allow'
• 'pick'
• 'place'
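Following the pattern of the activate element from slide 52, a minimal sketch of how these primitives are used; 'board1' and 'leftside' are the tangible and point of interest defined in the workplace model above:
<!-- overlay a drill pictogram at the point of interest 'leftside' -->
<activate tangible="board1" predicate="drill" poi="leftside"/>
<!-- 'point' additionally takes an option for the arrow direction -->
<activate tangible="board1" predicate="point" poi="leftside" option="upperleft"/>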
57. Triggers and tangibles
If you add a tangible trigger (for 'stareGaze' navigation), a target icon will be overlaid, rotating in yellow and turning green when the stare duration (3 secs) has been reached:
<trigger type="detect" id="board1" duration="3"/>
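For context, a sketch of where such a trigger might sit, following the action structure from slide 47; placing it in the <triggers> block of an action step is an assumption consistent with the click triggers shown earlier:
<triggers>
  <!-- move on when the user stares at tangible 'board1' for 3 seconds -->
  <trigger type="detect" id="board1" duration="3"/>
</triggers>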
58. Warning signs
Add an enter activation:
<activate tangible="board1" poi="leftside" warning="p030"/>
…
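A sketch of how this might pair with the enter/exit pattern from slide 53; that warning overlays are removed with a matching deactivate is my assumption, not stated on the slides:
<enter removeSelf="false">
  <!-- show warning sign p030 at the point of interest 'leftside' -->
  <activate tangible="board1" poi="leftside" warning="p030"/>
</enter>
<exit>
  <!-- assumption: a matching deactivate removes the warning overlay on exit -->
  <deactivate tangible="board1" poi="leftside" warning="p030"/>
</exit>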
60. Open Problems
• Real-time messaging (multi-user, multi-device)
• Revision needed: xAPI auto-logging
• Query language for constraint validation
• Performance analytics
• Validator service
• LEM aggregator ('Open LEM')
61. Your Reference Implementation
Design competition at the 10th ECTEL conference 2015, "Envisioning Wearable Enhanced Learning":
– 500-word abstract (approx. 2 pages)
– and design samples (e.g. mock-ups, videos, prototypes)
Wearable Enhanced Learning (WELL) is emerging as a transformational step in the transition from the desktop age through the mobile age to the age of wearable, ubiquitous computing.
ECTEL (http://www.ec-tel.eu/) will take place in Toledo (Spain), 15–18 September 2015.
http://bit.ly/sigwellcompetition