HANA ‘The Why’
Henry Cook, SAP HANA Global Centre of Excellence
January 2016 https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
Special Note:
This slide deck is provided for those wishing to gain a copy of the slides to the “Why HANA”
presentation published on YouTube and as a Blog.
The best way to consume this presentation is to first watch it being presented, then to use
these slides as reminders, or as supporting material for your own meetings.
The video can be reached through the following two links. The first is a blog which provides
context, an introduction, and a link to the video. The second link goes directly to the video.
https://blogs.saphana.com/2016/03/11/hana-the-why/
https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
Once downloaded, the first part of this SlideShare (slides 1-46) can be viewed or used just as
it appears in the video itself.
The second part of the SlideShare (slides 48-92) provides speaker notes for all the slides; these
can be used to revise or clarify particular topics within the presentation.
We hope that you find this useful in progressing along your own HANA journey!
R/2 (1979): Mainframe
R/3 (1992): Client/Server
ERP (2004): Web, SOA
2015: IoT, APIs, Mobile
A logical evolution, each step necessary to provide significant new
business capability and escape the constraints of the past
Accelerating business innovation through radical simplification
Our Vision: help the world run better and improve people's lives
Our Mission: help organizations become best-run businesses
Our Passions: teamwork, integrity, accountability, professionalism and trust
SAP 7-8 years ago:
 Impeded by complexity
 A large, complex suite of applications
 15-month release cycles
 Surrounded by increasingly nimble competitors
 Dependent upon others for data management
 Incurring large development costs
Key question: how to get ahead, and stay ahead, of the market?
Strategic response: massive simplification of what we do.
This simplification is what we now know as HANA.
Traditional ERP system architecture
[Diagram: the traditional landscape as a sprawl of separate database instances - row upon row of 'DB' boxes, one per application and server, each holding redundant copies of data]
Transformation through Simplification
[Chart: a 'before' cost bar (effort/services/admin, software, hardware) next to an 'after' bar - simplification shrinks each layer, and the resources saved are diverted as investment into innovation, new function, revenue and profit]
HANA Simplification shows up as:
Productivity
 Users
 Developers
Agility
 Faster response, time to market
 Easier change
TCO
 Radical simplification of IT landscape
How do traditional systems impede benefits?
 Hard to combine multiple techniques: you have to spend time on application integration
 Hard to combine data sources: the time and cost of staging data through multiple data stores
 Redundant pre-aggregated data: 60-95% of our data objects, and thus of our effort (design, tuning, maintenance), are due to these supporting data
 Slow response times (answers in minutes / hours / days): destroys 'flow' and productivity, and forbids 'heavy math'
The Motivating Idea
In-Memory Data Management: An Inflection Point for Enterprise Applications. Hasso Plattner, Alexander Zeier. ISBN 978-3-642-19362-0
The In-Memory Revolution: How SAP HANA Enables Business of the Future. Hasso Plattner, Bernd Leukert. 21 Apr 2015
History 2007 - The Business Goals of HANA
Source: SAP Suite on HANA announcement, January 10, 2013
Qn: 14 years after R/3, what would an ERP (enterprise) system look like if we started from scratch? [Hasso Plattner Institute, Potsdam]
 All active data in-memory
 Leverage massively parallel computers (scale with cores vs. CPU speed)
 Use Design Thinking methodology
 Radically simplified data model
 OLTP and OLAP back together
 Instant BI (on transactional data)
 No more batch programs
 Live conversation (instead of "briefing books")
 Response < 1 sec (for all activities, even complex algorithms)
 Virtual DW for multiple data sources and types
 No aggregates or materialized cubes (dynamic views)
 Views on views (up to 16 levels)
 Mobile, wherever appropriate
 Aggressive use of math
HANA Techniques: How to Build a New Kind of Enterprise System
Map reduce; active and passive data store; analytics on historical data; single and multi-tenancy; reduction in layers; multi-core / parallelization; dynamic multi-threading; virtual aggregates; partitioning; minimal projections; no disk operation; insert only; real-time replication; any attribute as an index; text analytics; object-to-relational mapping; group keys; lightweight compression; on-the-fly extensibility; spatial; transactional column store; SQL interface on columns and rows; libraries for stats & biz; beyond SQL.
Global development, 2007: Seoul, Shanghai, Ho Chi Minh, Bangalore, Tel Aviv, Berlin, Walldorf, Paris, Toronto, Vancouver, Dublin CA, Palo Alto
https://www.youtube.com/watch?v=jB8rnZ-0dKw
HANA is designed to be more than just a Database
Preconfigured appliance (or VM or cloud)
■ In-memory software + hardware (HP, IBM, Fujitsu, Cisco, Dell, NEC, Huawei, VCE)
In-Memory Computing Engine software
■ Data modeling and data management
■ Real-time data replication via Sybase Replication Server
■ Data Services for ETL capabilities from SAP Business Suite, SAP BW and 3rd-party systems
■ Data federation: remote RDBMS, Hadoop ...
Components
■ Row store
■ Column store
■ Calc engine
■ Graph engine
■ Application server (XS server)
■ Predictive Analytics Library
■ 'R' interface
■ SQL Script / calc engine language
■ Text engine
■ Planning engine
■ Spatial
■ Business Function Library
■ Persistence (logging, recovery), ACID transaction integrity
[Diagram: SAP Business Suite, SAP NetWeaver BW, SAP BusinessObjects and 3rd-party applications access SAP HANA via SQL, MDX and BICS; data arrives via real-time replication, federation, and Data Services ETL/ELT; inside, the In-Memory Computing Engine combines row & column storage with the calculation and planning engines, modelled via the Studio]
There has been a revolution in hardware
20 years ago: memory 1 GB; CPU 4 x 50 MHz; ~1 million transistors per CPU
Now: memory 6 TB (x 6,000); CPU 120 cores x 3 GHz (x 1,800); 2.6 billion transistors per CPU
Near future: memory 48 TB; 480 cores (8 x 4 x 15)
Note: figures are for single servers
HANA works in a fundamentally different way to other systems; this initially looks complex, but is actually easy to understand. (Don't panic!)
Data access latency and bandwidth, from the CPU core outwards (source: Intel; bandwidth up to ~50 GB/s at the top of the hierarchy):
 L1 cache: ~1.5 ns
 L2 cache: ~4 ns
 L3 cache: ~15 ns
 DRAM memory on the same server blade: 60-80+ ns, ~12.8 GB/s
 DRAM on another server blade: 100-500+ ns, ~3 GB/s
 SSD: ~200,000 ns, ~0.5 GB/s
 Mechanical disk: ~10,000,000 ns, ~0.07 GB/s
A Useful Analogy
 CPU
 L1 cache: the table (1 m)
 L2 cache: the kitchen fridge (3 m)
 L3 cache: the garage (9 m)
 Main memory (RAM): the local London shop (30 m)
 Disk: the brewery in Milwaukee, USA (6,000,000 metres)
When you store tabular data you get to store it in one of two ways
A table of information is laid out linearly along memory addresses:
... by row: A 10 ... € | B 35 ... $ | C 2 ... € | D 40 ... € | E 12 ... $ (each row, with its many columns, stored contiguously)
... by column: A B C D E | 10 35 2 40 12 | ... | € $ € € $ (each column's values stored contiguously)
For rows, data is moved to the processor in 'chunks', but only a tiny proportion of those 'chunks' is useful
By row: A 10 ... € | B 35 ... $ | C 2 ... € | D 40 ... € | E 12 ... $
• Data laid out in row sequence
• When summing numbers, only a small proportion of the data moved can be used
• Lots of 'padding'
• The processor (3 bn ticks/sec) 'spins its wheels' waiting for each fetch
With data stored in columns, "Every Byte Counts"
The column of values (9, 101, 10, 35, 2, 40, 12, 53, 44, ...) is 'dropped' into the on-chip cache memory (L1/L2/L3) feeding a processor running at 3 bn ticks/sec:
• Caches kept filled - no 'padding'
• The processor doesn't have to wait
• Pick up multiple data items each time you fetch data
• Hold data compressed
• Compute on compressed data
• Amazingly more efficient: 100,000x speed improvements
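To make the contrast on these last two slides concrete, here is a minimal sketch in Python (illustrative field names and toy values, not HANA code) of the same five-row table stored both ways. Summing one numeric attribute drags every whole record past the processor in the row layout, but touches only one dense array in the column layout:

```python
# The same five-row table stored two ways.

# Row store: each record carries all of its attributes, so a scan of one
# attribute still moves every byte of every record toward the CPU.
rows = [
    {"id": "A", "amount": 10, "currency": "EUR"},  # ... many more columns in practice
    {"id": "B", "amount": 35, "currency": "USD"},
    {"id": "C", "amount": 2,  "currency": "EUR"},
    {"id": "D", "amount": 40, "currency": "EUR"},
    {"id": "E", "amount": 12, "currency": "USD"},
]

# Column store: one contiguous array per attribute.
columns = {
    "id":       ["A", "B", "C", "D", "E"],
    "amount":   [10, 35, 2, 40, 12],
    "currency": ["EUR", "USD", "EUR", "EUR", "USD"],
}

# Row layout: the 'amount' values are interleaved with everything else.
total_rows = sum(r["amount"] for r in rows)

# Column layout: one dense array, nothing but useful bytes - the shape
# that keeps the on-chip caches filled.
total_cols = sum(columns["amount"])

assert total_rows == total_cols == 99
```

The answer is identical either way; the difference is how many useless bytes travelled to the processor to produce it.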
Columnar data stores offer huge benefits, if you can use them in a general-purpose way
New columnar benefits:
 Highly dense data structures
 Multiple values for computation available at once
 Fully exploit modern CPU architectures
 Can be joined with row-based data
Traditional column-store benefits still apply:
 Compresses nicely
 Easy to add new columns non-disruptively - productivity
 Reduces data being processed to just those columns accessed
Get full benefit by using them for everything:
 Transaction processing
 Text
 Spatial
 Predictive, etc.
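The "compresses nicely" point is usually delivered by dictionary encoding - the "lightweight compression" named on the techniques slide. Below is a minimal illustrative sketch of the idea (not HANA's actual implementation): each distinct value is stored once, and the column itself becomes an array of small integer codes that can be scanned, and even filtered, without decompressing:

```python
# Dictionary-encode a low-cardinality column: store each distinct value
# once, and replace the column with an array of small integer codes.
currency = ["EUR", "USD", "EUR", "EUR", "USD", "EUR", "USD"]

dictionary = sorted(set(currency))            # ["EUR", "USD"]
code_of = {v: i for i, v in enumerate(dictionary)}
encoded = [code_of[v] for v in currency]      # [0, 1, 0, 0, 1, 0, 1]

# Predicates run on the compressed codes: 'WHERE currency = USD' becomes
# a scan for one small integer, never touching the original strings.
usd_code = code_of["USD"]
usd_rows = [i for i, c in enumerate(encoded) if c == usd_code]
print(usd_rows)  # [1, 4, 6]
```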
All those extra transistors can be used in a number of ways to speed up computation and throughput - by 100,000x or more
• More processor cores per chip, with fast memory to keep the cores fed
• Give each core its own fast cache (L1, L2, L3), so computations in registers seldom wait - thus make full use of the fast registers
• Registers used to hold 1 data value each; make registers bigger (e.g. 256 bits instead of 32), add circuitry to let them process multiple values at a time ("vector instructions"), and add more registers!
[Diagram: a 3.4 GHz CPU chip - cores with registers and L1/L2 caches, a shared L3 cache, DRAM, and logs/persistence beneath]
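The register and vector-instruction point can be felt even from Python, with NumPy standing in for the hardware's vector units (the one-million-value array size is an arbitrary choice for illustration). A dense, typed column is summed in one vectorized call rather than one interpreted step per value:

```python
import numpy as np

# A dense column of a million integers - the kind of contiguous, typed
# array that vector (SIMD) instructions are built to chew through.
amounts = np.random.randint(0, 1_000, size=1_000_000, dtype=np.int32)

def scalar_sum(values):
    """One value per step - the pre-vector way of working."""
    total = 0
    for v in values:
        total += int(v)
    return total

# Vector style: a single call that processes many values per instruction
# under the hood.
vector_total = int(amounts.sum(dtype=np.int64))

assert scalar_sum(amounts) == vector_total  # same answer, very different speed
```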
HANA Memory Data Flow
• Parallelism: multi-server / CPU / core
• Vector / SIMD: multiple values per instruction
• Scanning: 5 bn / core / s
• Aggregation: 12 m / core / s
• 600 bn scans / single server / sec
• ALL operations work this way: RDBMS, transaction update, text, predictive, spatial, ...
• Enables: simplicity, productivity
What SAP means by "in-memory" can be very different to what others may mean
• HANA uses the on-chip caches and registers to feed vector instructions: the 100,000x performance advantage
• HANA does this for ALL its processing
• These memories, and the way they are used, are completely different things to DRAM
• To do this you need to start from scratch: the algorithms must use these new parts of the processors
• Other systems simply use RAM memory to reduce disk I/O - the traditional use. That doesn't get you to 100,000x, and other uses are limited (e.g. no OLTP)
Traditional table design in disk-based RDBMS - many tables
Complex assemblies of tables complicate BI development and change:
• A main master record plus audit / subset records
• Application-built secondary index tables and aggregate tables
• DBA-built secondary index tables and aggregate tables
• All accessed through ABAP and the VDM / SQL
The S/4HANA design concept makes use of the much simpler and more elegant columnar data method
• Individual attributes are each held in their own separate column store
• No indexes: each column is its own index
• Insert only: a new value is inserted, and the old one is not overwritten but kept, marked with a timestamp / validity marker. One new value is written; the old one is marked as 'no longer current'
• No aggregates: we have the ability to produce aggregations dynamically, 'on the fly'
• No complex audit tables: audits can be reconstructed as and when needed from previous values
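A minimal sketch of the insert-only idea (illustrative data structures, not S/4HANA internals): every update appends a new version and closes the old version's validity interval, so the current value, any point-in-time view, and the audit trail all come from the same column of versioned values:

```python
import itertools

_clock = itertools.count(1)  # stand-in for a commit timestamp

# Append-only store: every update is an insert; nothing is overwritten.
# Each version is (value, valid_from, valid_to); valid_to None = current.
stock = []  # versions of one material's stock level

def update(value):
    now = next(_clock)
    if stock:
        v, frm, _ = stock[-1]
        stock[-1] = (v, frm, now)       # close the old version's validity
    stock.append((value, now, None))    # insert the new current version

def as_of(t):
    """Point-in-time read: the audit trail reconstructed on demand."""
    for value, frm, to in stock:
        if frm <= t and (to is None or t < to):
            return value

update(100); update(140); update(90)
print(as_of(1), as_of(2), stock[-1][0])  # 100 140 90
```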
S/4HANA example of dramatic simplification: Logistics - Materials Management
BEFORE HANA: the SAP Logistics table assembly with its auxiliary aggregates and indices - MKPF, MSEG, MARC, MARD, MCHB, MKOL, MSKA, MSKU, MSLB, MSPR, MSSA, MSSQ, MSTB, MSTE, MSTQ and their history counterparts (MARCH, MARDH, MCHBH, MKOLH, MSKAH, MSKUH, MSLBH, MSPRH, MSSAH, MSSQH, MSTBH, MSTEH, MSTQH): 28 tables, not counting change-log tables
AFTER conversion to HANA: 1 table - the new MSEG (plus sLog)
Example of massive simplification: SAP Accounting powered by SAP HANA
From: separate Logistics, CO and FI documents feeding pre-defined aggregates (totals & indices for Financial Accounting and for Management Accounting), built for stability, processing and analytics
... To: a single logical document (Logistics document, CO document, FI document) with aggregation on the fly via HANA views - stability, processing and analytics, plus flexibility
Customer benefits:
• Harmonized internal and external reporting
• Significantly reduced reconciliation effort
• Significantly reduced memory consumption
• Higher flexibility in reporting and analysis
• Central journal for heterogeneous system landscapes
[Slides 29-31: before/after block diagrams of the data model; 1 block = 10 data objects]
SAP HANA, the great simplifier of enterprise software
2010/11 - SAP HANA:  in-memory platform
2012 - SAP Business Warehouse powered by SAP HANA:  real-time analysis  real-time reporting  real-time business
2013 - SAP Business Suite powered by SAP HANA:  OLAP and OLTP together  SAP HANA Enterprise Cloud for SAP Business Suite on SAP HANA
2014 - SAP Simple Finance powered by SAP HANA:  instant financial insight  no aggregates  single source of truth
2015:  simplified data model  new user experience  advanced processing  choice of deployment
Some facts about SAP S/4HANA
10x smaller data footprint | 4x fewer process steps | 1800x faster analytics & reporting | 7x higher throughput
1. Built on SAP HANA
2. ERP, CRM, SRM, SCM, PLM in one system
3. No locking, parallelism
4. Actual data (25%) and historical (75%)
5. Unlimited workload capacity
6. Predict, recommend, simulate
7. SAP HANA Cloud Platform extensions
8. SAP HANA multi-tenancy
9. All data: social, text, geo, graph processing
10. New SAP Fiori UX for any device (mobile, desktop, tablet)
Three deployment options: on-premise, public cloud, managed cloud
Revising our Assumptions about System Design
Things that are now much easier with HANA:
• Simultaneous real-time update (Xm tx/s) and complex analysis on a single copy of the data
• Doing BI directly on operational system data in real time
• Developing against a much simpler data model - the logical data model
• Using sophisticated math for forecasting, prediction and simulation, with fast response
• Making changes 'on the fly' rather than in 'N-week mini-projects'
• Faster changes to simpler data models: metadata rather than physical data changes
• Interactive examination of data, even at production volumes
• Fast prototyping using production-scale volumes
• What were batch processes become interactive ...
Speed allows elimination of aggregates and indexes; this alone can result in a 90%+ reduction in complexity
Before: data is copied (ETL) from the operational data store into a data warehouse, where queries run against indexes and aggregates built over the data
After: with SAP HANA, queries run through the calculation engine directly against the data; the copy from the operational data store becomes optional
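The right-hand side of this picture is simply "aggregate at query time". Here is a minimal sketch (illustrative names; in HANA itself this would be an ordinary SQL GROUP BY against the column store) of computing totals on the fly at whatever grain is asked for, with no totals tables to design, load, tune or keep consistent:

```python
from collections import defaultdict

# Line items: the only stored data - no daily/monthly/regional totals tables.
sales = [
    ("2016-01", "DE", 120.0),
    ("2016-01", "UK", 80.0),
    ("2016-02", "DE", 95.0),
    ("2016-02", "UK", 110.0),
]

def total_by(key_fn):
    """Aggregate 'on the fly' at whatever grain the question needs."""
    totals = defaultdict(float)
    for month, country, amount in sales:
        totals[key_fn(month, country)] += amount
    return dict(totals)

print(total_by(lambda m, c: m))       # by month
print(total_by(lambda m, c: c))       # by country
print(total_by(lambda m, c: (m, c)))  # every month x country intersection
```

Every 'permutation and combination' of dimensions that would once have been a pre-built aggregate is now just a different key function evaluated at query time.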
On-the-fly transformation provides greatly increased flexibility for sharing data
The view layer concept: a persistence layer (main memory, plus log) sits beneath a layer of stacked views (views built on views), with the presentation layer on top - spreadsheets, business transactions, analytical applications, any software - all reading through views rather than through private physical copies.
Source: In-Memory Data Management, Hasso Plattner / Alexander Zeier
Project efficiency: ~60% reduction - tasks shrink or are eliminated
Current mode of operation (CMO), traditional RDBMS: a ~7-month project
6 weeks define; 4 weeks develop (2 days data); 3 weeks test; 3-4 weeks rework; 2 weeks tune; 2 weeks backload; 4 weeks volume test; 2-3 weeks report; 2 weeks implement
Future mode of operation (FMO), SAP HANA (column-store in-memory DB): a ~3-month project
4 weeks define; 4 weeks develop/test/rework (unlimited data! 1 day tune!); 2 weeks report development & volume test; 1-2 weeks implement; 1-3 days backload
Why the tasks shrink:
• Replicate rather than ETL - replication or ETL can go 50x faster (e.g. BW PoC)
• Avoid the physical model (4-6 layers and transformations) - a virtual model instead, easily transported, with faster reload (no intermediate physical layers, in-memory)
• Single modelling tool (PowerDesigner)
• Less development: activate replication rather than building ETL; no physical layers
• Less testing: replication is easier to test; fewer transformations; faster, iterative test/fix/test; model-driven development
• No index (re)build; no need to change the physical data model (e.g. aggregations); no embedded calculation - you only need to set parameters
• Higher self-service/analysis means fewer reports to build; no need to renew the semantic layer
Example: "We took an analytic that took us 6 months to develop and we redeployed it on HANA in two weeks. Results come back so quickly now, we don't have time to get coffee." - Justin Replogle, Director - IT, Honeywell
Simplified stack: fewer parts, easier development
1. Fewer servers and storage
2. Fewer layers of software
3. Simpler administration
4. Lower BI run cost
5. Faster time to deliver BI projects
6. Productivity - tools 'at your fingertips'
7. Reduced 'shadow IT' costs
The HANA architecture lends itself well to an initial TCO impact analysis. Based on preliminary analysis with customers, we established that the overall TCO impact of the roadmap will be beneficial to operating costs (i.e. excluding the substantial business benefits from in-memory technology).
The SAP HANA Platform
A complete platform: development, administration, all styles of processing and data
S/4HANA: the primary reason for HANA, and the culmination of a five-year release process
2011 - SAP HANA:  in-memory platform
2012 - SAP Business Warehouse powered by SAP HANA:  real-time analysis  real-time reporting  real-time business
2013 - SAP Business Suite powered by SAP HANA:  OLAP and OLTP together  SAP HANA Enterprise Cloud for SAP Business Suite on SAP HANA
2014 - SAP Simple Finance powered by SAP HANA:  instant financial insight  no aggregates  single source of truth
2015 - SAP S/4HANA:  simplified data model  new user experience  advanced processing  choice of deployment
Click on the picture in slide-show mode to view the video:
https://www.youtube.com/watch?v=q7gAGBfaybQ
How might HANA simplification solve pressing problems?
Productivity
 Users
 Developers
Agility
 Faster response, time to market
 Easier change
TCO
 Radical simplification of IT landscape
NEXT STEPS
 Review requirements; look at them with fresh eyes
 Revisit 'keeps me awake at night' issues
 Determine how many could be solved, or assisted, by the new capabilities of HANA
 Identify those that are now possible and were not before
 Identify those that are hard / costly to meet now but could be solved more easily, more quickly or at less cost, and reconsider how to deliver them
Thank You
Q&A
Henry Cook
SAP Database & Technology, HANA Global Centre of Excellence
SAP (UK) Limited, Clockhouse Place, Bedfont Rd., Feltham, Middlesex, TW14 8HD, United Kingdom
Henry.Cook@sap.com
T +44 750 009 7478
Mark Mitchell
SAP Database & Technology, HANA Global Centre of Excellence
SAP (UK) Limited, Clockhouse Place, Bedfont Rd., Feltham, Middlesex, TW14 8HD, United Kingdom
m.mitchell@sap.com
T +44 208 917-6862
https://blogs.saphana.com/2016/03/11/hana-the-why/
https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
HANA The Why Video Jan 2016.pptx
www.saphana.com
www.sap.com/hana
www.youtube.com/user/saphanaacademy
Speaker Notes Follow
• In order to make this presentation self-contained, the speaker notes for the slides are included
• These can be printed or opened in a separate window to accompany the presentation
• They are also useful if you want to refresh your memory regarding a particular topic
HANA ‘The Why’
The purpose of this session is to remind ourselves of the reasons why HANA was invented, and
why it is unique. HANA represents a fundamentally new approach to large enterprise systems,
one that provides significant simplification. Because it is a new approach it overturns many of
the old assumptions we have about enterprise systems and causes changes in technology,
applications, systems design etc. Because of this it can sometimes be difficult to "see the forest
for the trees".
It is a disruptive technology that is having a major effect on the marketplace. Witness the
number of competitors now scrambling to introduce HANA-like features.
It should be viewed in the same light as the introduction of the mainframe, the introduction of
client/server, the PC and the Internet, and will spark a whole generation of in-memory systems.
We’ll hear as we go through why HANA is special and differentiated and why we expect to
maintain a leadership position for the foreseeable future.
The scene shown is in Hawaii (you can almost feel the warm breeze come through the screen)
Hasso Plattner, the inventor of HANA, and our CTO at the time, Vishal Sikka, have both
visited Hawaii (Hasso is a keen sailor), and in Hawaii there is a place called HANA Bay.
In fact when people talk about ‘the road to HANA’ you can see it here – the road just up from the
shoreline is the road to HANA Bay in Hawaii and if you do the trek to the other end of it you get
an award for doing so.
There are several stories about how HANA got its name - one is that it was named after HANA
Bay, another that it informally stood for "Hasso's Amazing New Architecture".
Each year we have been taking HANA customers who are in the 100,000 Club to HANA Bay.
These are customers who have taken a piece of production work and used HANA to speed it up
by 100,000x or more. So, ‘your mileage may vary’ and we’d not guarantee these kinds of results
but speedups of 10x, 100x, 1,000x and more are very typical.
https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
A logical evolution each being necessary to provide significant new
business capability and escape the constraints of the past
Before we go back to the beginning, let's just take stock of where we are with HANA.
As you'll be aware, SAP applications have gone through a series of generations; the original system, called R/1,
was mainframe-based, as was R/2.
Twenty-three years ago the applications in the Business Suite were being constrained, even when we used the
largest mainframes available.
Note that the 'R' always stood for 'Real Time': the idea that we could do business immediately with no
unnecessary waits. Remember that before this it was usual for applications to be purely batch, using decks of
cardboard punched cards or magnetic tape. The early mainframe versions used the newly available online
terminals, allowing work to be done in real time with online transactions.
As larger organisations adopted SAP, and those organisations grew, they outgrew the capacity of even the
largest mainframes.
So, in 1992 SAP made a bold move, and became one of the first large software vendors to move to a completely new and innovative architecture, the “Client /
Server” architecture. This allowed large applications to be broken into pieces and those pieces to be spread over different servers. Moreover these servers
could be less costly UNIX servers, rather than expensive mainframes. Where we wanted to deploy different modules, or different applications, Enterprise
Resource Planning, Customer Relationship Management and Supplier Relationship Management etc this could then be done by placing them on different
servers and networking them together. This allowed us to sidestep the constraints of a single mainframe server. There were other benefits too: the clients in
the client/server relationship no longer needed to be "dumb terminals"; they could be powerful and flexible personal computers with attractive, easy-to-use
graphical user interfaces - we forget now just what a revolution this was.
As we said, for the past twenty-three years this worked well, and it represented the state of the art of the technology previously available.
However, in our view current technology has limitations, particularly in the areas of complexity, performance and flexibility. So, we set out to discover if there
was a fundamentally simpler way of implementing large systems. It turns out that there is, and this radical simplification is what we now know as HANA.
So, for a second time we are bringing in a revolutionary way of doing things: the move to in-memory computing. Whilst we'll continue to enhance and
maintain our existing Business Suite, we now have S/4HANA, the Business Suite rebuilt to fundamentally simplify how it works and how it can be used -
a revolutionary platform which brings significant new functionality, function that cannot be provided without the unique capabilities of HANA.
Accelerating business innovation through radical simplification
We've seen how HANA is now well established; we have introduced
it in a cautious and well-planned manner, and it is realizing its
original vision. Now let's go back and explore why we set off down
this path.
At SAP, our job is to bring the best combination of breakthrough
technology innovations together to help you accelerate your
business agenda.
With our history and focus on software innovation, SAP is uniquely
positioned to help you simplify your IT stack and accelerate
innovation.
However, 7-8 years ago SAP found itself in a bit of a bind.
The ERP applications were large and complex, with 15-month
release cycles.
We were surrounded by competition, many of them increasingly
more nimble than us.
We were dependent on others for data management, some of them
major competitors, and all of this complexity incurred great cost.
The only way we could see out of this would be if we could
radically simplify what we did, and this was the objective of the
research project that eventually produced HANA. This entailed
hundreds of developers working around the clock for 6-7 years – to
get to where we are now.
However, having done this, we are now poised to pull ahead, and
stay ahead, of the market because we have a fundamentally simpler
and more effective base from which to build.
Traditional ERP system architecture
Let’s just recap the history of ERP. Originally we ran on the mainframe and ran a
single application. Access was via online terminals (a major innovation at that time)
and everything was done in real-time. At the time we were serving mid-sized German
companies.
But then we sold to bigger companies, and those companies grew; as they did, we
needed to deal with "the multis": multi-company, multi-language, multi-currency,
multi-application (remember, with the mainframe, terminals were tied to applications;
multiple apps meant multiple terminals).
This exceeded the capacity of the mainframe, so we made a big leap, with SAP R/3,
to a three tier Client Server architecture, splitting the Client Interface, the application
logic and the database access across different servers, thus spreading the load.
At the same time we now allowed the newly available PC front ends to access multiple
back end applications, instead of having to have one mainframe terminal for each
application we needed.
In the third step we see how this expanded as new applications were added, some of
which we wanted to be able to sell on their own without necessarily having the ERP
core, and at the same time we could mix and match servers to tune for performance
and also to make operations easier. At the top of the diagram we see the appearance of
separate management information systems, data marts and 'spreadmarts' - the proliferation of spreadsheets that are actually being used as operational data
marts in the organisation. These were used by organisations to make SAP data available for querying and reporting.
Note that where these different applications were sold with ERP – as was usually the case – a subset of the ERP data was replicated within the applications
and there would be feedback processes that fed back to ERP too.
In the 4th step we see the addition of a dedicated server for the Business Warehouse, to make management information available from SAP applications, but
often this didn't kill off the separate management information systems; in fact many organisations used BW as a mechanism to populate separate management
information systems. There might also be a separate generic data warehouse, added to combine SAP data with non-SAP data - at the time it was easier to do this
by extracting the SAP data and taking it elsewhere rather than bringing the non-SAP data in (we'll see that this has changed). More servers are added, more
data marts and more redundant copies of data. However, remember that by going this route we were able to scale the systems and to optimise for individual
applications, and, at the time, there was no alternative given the technology that was available.
Transformation through Simplification
Returning for a moment to our strategic reasons for producing HANA: this
diagram shows how a strategy of simplification has allowed us, and can allow others,
to innovate - to renew ourselves - without having to incur large amounts of
additional cost or divert resources from essential development or support.
By removing complexity - which saves effort (services, admin), reduces the
amount of hardware required through landscape simplification, and can also
reduce the overall amount of software needed - we can make room in the
budgets to do the new, innovative things needed to take advantage of
opportunities and to stay relevant.
This is precisely the strategy that SAP has been following to escape from the
complexity and cost that was weighing us down, and which is now allowing
us to pull ahead and stay ahead of our competition.
If you think about it this is the only way to be able to escape this complexity
and cost trap without adding massive extra cost.
Our colleague Norman Black coined the term 'transformation through
simplification': if you simplify, then you transform almost by default - in fact it
is hard not to, because you begin to find you are using less effort and have
less cost, and these can be put to work doing the new things you need to do
to be able to compete. If you think about it, when you simplify things you naturally
transform things - you can't help but do so, because you naturally start to do
things in the new, simpler way.
Many organizations are in the situation that we were in 7-8 years ago, and
likewise many can now take advantage of what we have done with HANA,
Cloud, Mobile etc. to move to a simpler, more productive, more agile and less
costly way of working.
HANA Simplification shows up as:
We’ll see that HANA brings a much simpler and more elegant approach to
large information systems. As we’ll see, we should not be surprised at this
as it was the prime objective in its development.
As we go through looking at the different aspects keep in mind that we are
expecting HANA to provide benefits in terms of productivity (for users and
developers), in agility and in reduction in costs. So, it is useful, every now
and again as we go through the various topics, to ask ourselves “why does
this improve productivity?”, “why would I be able to do things in a more
agile way?” and “how does this reduce costs?”. In most cases there are
obvious reasons why this would be so.
The key to understanding HANA is that it is fundamentally about
simplification and this expresses itself in several ways.
It can make end users and developers more productive: giving them results
quicker (orders of magnitude quicker), letting them develop useful new,
benefit-bearing business function faster, and enabling them to do much
more work in a given period of time. For developers this is because the
system and data structures they are dealing with are fundamentally less complex. For end users, they are able to use techniques not previously possible
and to get their answers interactively instead of having to wait minutes or
hours; thus they work more 'in the flow'.
It makes people more agile by being able to respond faster, whether that is with instant results to an ad hoc query or being able to put together a new
requirement much faster than with existing technology. E.g. one customer said: "we reproduced in two weeks with HANA an application it had taken six
months to build on the current systems - and HANA was a better system with much better performance."
Developers can bring new applications to production much faster, because the design process is much simpler, developers get instant feedback, and there
is less to design and develop.
Likewise, where HANA can be used to collapse multiple MIS systems into one, and to combine one or more OLTP and OLAP systems together, this can bring
about a massive simplification and cost saving in our IT landscape.
The aims of HANA are to simplify, and in the process build a better and more profitable business.
How do traditional systems impede benefits?
Before we go on, it's worth reflecting for a moment on what kinds of problems we
encounter in existing systems.
One of the things that Hasso and his team spotted early on was that a large proportion
of the data structures in an application have nothing to do with the business purpose of
that application. Rather they are the ‘supporting data structures’ that have to be added
to make the system perform; these are the familiar pre-aggregates, indexes, cubes, materialised views,
pre-joins etc.
These can account for anywhere from 60% to 95% of the data objects in the application.
That is, if you looked at the data model for the application (that is, the
physical data model that is used to define all the data items) you'd find that
only a small percentage was basic business data: the data on Customers,
Transactions, Products etc. The remaining majority of the data items are there to help
ensure performance; for example we might have several layers of pre-aggregation of
sales data: daily, weekly, monthly, quarterly and yearly. Likewise we probably have
pre-aggregation of the different intersections of data dimensions – product by
geography, with again multiple levels – product at individual item, product group,
product super-group, product class; these can be combined with geographic measures at the location, district, region and country level - all the permutations
and combinations. Somebody has to design, develop, test and maintain all of these!
This is a colossal overhead, but one which we don't notice, because we've always had to do it and everyone has had the same problem - so we assume that
this is 'just the way it is', the standard way that we build systems.
Similarly, if we want to use multiple processing techniques - say database, predictive and text - in current systems these are typically separate systems with
separate hardware and software 'stacks'. To use them together you don't simply invoke them; you have to do an 'application integration' job to combine
them.
Another problem is simply response time. Wouldn't it be great if all operations came back in a second or two? But they don't; we are used to delays of
minutes, hours or days. This destroys the 'flow' of management discussions and problem solving. It also makes it difficult to use sophisticated maths to solve
problems - because it takes too long. Again, we regard this as normal.
It's also hard to combine data from different sources. Even if we can reach out and read data from existing systems, this typically doesn't perform very well;
thus we find it hard to re-use data from existing systems and protect the investment in them - and it takes a long time to meet requirements.
The Motivating Idea
We'll continue this section with a picture from Hasso Plattner's book on in-
memory computing that encapsulates the original aim of HANA's invention. This
picture shows "The board meeting of the future".
All the CXOs are gathered around and they have all their information at their
fingertips. By the way, the same applies to all the staff beneath them too: the
staff in branches, on the road, middle management, supervisors, field staff
and call centre operators, everybody has direct and immediate information.
Operational data is completely up to date, to the second; there is no need to
wait for complex month-end runs, and we can keep pace with what customers are
doing and what is happening in the supply chain second by second.
Not only that, but if an analysis is required it can be performed then and
there, within seconds - no need to gather data and meet again next month.
Analysis, no matter how complex, can be performed sufficiently quickly to
provide input into the conversation as it happens, and that analysis is done
on complete information that is known to be up to date. There is no
disagreement between the operational and analytical systems, because they
are one and the same system.
Clearly it will take some time to get there, for customers to implement the
complete vision, but I think that you will soon see that this is where we will
get to, and why HANA and in-memory systems are uniquely able to get us
there. In fact, if you look at what we are doing with HANA Live and Simple
Finance you will see that we are a substantial way along the road. SAP HANA
provides a fundamentally new capability. A company that can operate in this
mode, with the agility, control and cost effectiveness that it implies will have
a significant advantage over a competitor company that does not.
History 2007 – The Business Goals of HANA
What we see here is a slide that was used by Hasso Plattner, Chairman and
one of the original founders of SAP, at the launch of Business Suite on
HANA in 2013, and which outlines the original design objectives for HANA.
Hasso had founded his own university, the Hasso Plattner Institute in
collaboration with the University of Potsdam and was lecturing on enterprise
systems.
As part of this he discussed with his PhD students the design of enterprise
systems, and his students being bored with learning about old fashioned
ERP systems wanted something more modern to study and discuss. Hasso
set them the task of working out what a truly modern enterprise system
should look like if we could start with a clean sheet of paper,
incorporating all that we now know about systems design.
The title here talks about ERP, as this was the Business Suite on HANA
launch, but the actual objective was really to figure out what any modern
enterprise system should look like.
It is noticeable that the objectives are mostly business objectives. A good
way of thinking of this is that the objective was to remove "all the things that
have bugged us about big systems for the last 20-30 years".
For example, we split OLTP and analytics apart over 30 years ago because it
was not physically possible to combine them on one system. This was done
two generations ago and we've forgotten why we did it; it has just become
common practice. Likewise we'd like to get rid of batch, do BI on demand
against the transactional system, not need aggregates, be able to use heavy
duty maths, etc. We'd also like a sub-second response time, because that
is the way that human beings are wired: they are maximally productive when
they don't have to put up with wait times and delays.
Of course, on the way through we’d expect to make use of techniques such
as parallel processing and holding all our data in memory.
HANA Techniques: How to Build a New Kind of Enterprise System
This is a representation of many, but not all, of the main techniques pioneered by SAP
HANA. Some are adaptations of previously known techniques, such as Massively Parallel
Processing and column stores; some, like the updatable column store, were totally new
innovations. However, where existing techniques were used, typically HANA would take
them a step further and, equally important, these techniques were combined in a
particularly innovative manner. This took over six years of focussed R&D.
The development method itself was innovative, it was a distributed development project
using top talent from around the world. When the developers in Europe went to sleep,
those in Asia would wake up and continue, and when they reached the end of their day
they’d hand over to developers in the USA and Canada. This went on for several years
with hundreds of developers keeping development going literally around the clock. Other
vendors typically took 10 elapsed years to do what we did; we did it in 3 by having a
24/7 continuous 'shift system'.
HANA is known as an ‘in memory database’ but it is worth noting that of all the
techniques only one of them mentions being in-memory – the “no disk” technique.
This is a necessary part of the system but as you can see is by no means the full story.
Also, the way in which we use the techniques can be different. Column-store databases
had appeared before and shown their benefit and high efficiency by only transferring data in the subset of columns needed for a query and by allowing very
efficient compression. However, HANA takes this further: it uses column stores not just for these traditional reasons but to ensure that, by filling the column
stores with binary-encoded data ("lightweight compression" on the slide), it can keep the Level 1, 2 and 3 on-chip caches filled and thus make full use of the
SIMD and vector instructions available in modern CPUs. This is how we get 3.2 bn scans / core / second, and that in turn means we can dispense with
aggregates and indexes.
HANA is a general-purpose database, so another unique innovation is the ability to do transactional updates against the column store. As we mentioned, and
as you can see here, there is a lot more to HANA than simply putting our data in memory.
Modern workloads consist of more than just database work; there is also text processing, spatial, planning, OLAP, etc. Therefore we have specialist
engines for each of these that can collaborate within the in-memory system, and we have an innovative language that allows us to process in all these different
styles whilst keeping the procedures together, processed in memory, in parallel.
https://www.youtube.com/watch?v=jB8rnZ-0dKw
Pepsi Ad
Think back to our slide "HANA Techniques: How to Build a New
Kind of Enterprise System", which shows all the techniques that are
used by HANA - both those that were adopted and those that we
invented.
There is an analogy with an award-winning series of adverts that Pepsi
had in the 1970s, which summarised all the things you associate with
Pepsi in one high-energy snappy sentence.
The theme of the ads, and the strapline that went with it was “Lip
smackin, Thirst Quenching, Ace Tasting, Motivating, Good Buzzing,
Cool Talking, High Walkin, Fast living, Ever Giving, Cool Fizzin Pepsi!”
Pepsi fizzes, but they didn't just call it "Fizzing Pepsi" - that would be
selling it short.
Glancing back at the previous slide we see that we have a “Massively
Parallel, Hyperthreaded, Column Based, Dictionary Compressed, CPU
Cache Aware, Vector Processing, General Purpose Processing, ACID
compliant, Persistent, Data Temperature Sensitive, transactional,
analytic, Relational, Predictive, Spatial, Graph, Planning, Text
Processing, In Memory Database HANA!”
Every one of those things contributes to the unique thing we’ve done.
We’ve shortened that for convenience to “The In Memory Database
HANA”, but we should never forget that there is a whole lot more to it
than just “In Memory”.
HANA is designed to be more than just a Database
First, let's do a quick recap on what HANA is - what would turn up on your floor or
in your cloud if you ordered one? Don't worry about the detail for now; this is just
a quick note to fix in our minds what HANA physically is and isn't - we can come
back to the detail later on.
HANA is an appliance, a combination of software and hardware, the hardware being
available from multiple suppliers, though it can also be supplied now using cloud,
virtual machines or a 'tailored data center' configuration that can make use of
existing disk. It adheres to the appliance philosophy of being pre-designed,
pre-built and pre-tested, and can be delivered ready to plug in and go. Where we differ
from other vendors is that we believe it is no longer necessary to prescribe a single
type of expensive, premium-priced hardware. It has standard interfaces to the
outside world that make it easy to integrate with your IT estate.
HANA is not just a database. It has a very modern style of architecture in that it
contains a number of collaborating ‘engines’ each aimed at a particular style of
processing.
The goal is to enable all the different types of processing that applications might
need to do and to do them within the database, close to the data, and feeding the local caches within the modern CPUs so as to fully realize their enormous
processing potential.
This includes development life cycle tools.
Also, Text processing, the ability to do sophisticated text search and sentiment analysis.
There is support for multiple programming languages, including sophisticated SQL scripting languages and support for analytics, including business
function libraries and the open source ‘R’ statistical suite.
There is a planning engine for specialist planning functions, in particular aggregation and dis-aggregation.
This allows pretty much any type of application logic to be sunk down into the database – and thus fully benefit from the 100,000x processing gains.
We also support federation capability to seamlessly pull in data from remote databases and other systems such as Hadoop. This is natural, and just an
extension of our multi-engine design: our query planner and optimiser simply make use of other 'engines' that are outside of the in-memory core.
So, what we’ll expect to see is certain applications benefit straight away, and others benefit more as they start to exploit this way of working.
OK, so now we have a clear idea of what HANA is, let's go back to why it was invented.
There has been a revolution in hardware
Computer processor technology has changed out of all recognition over the
past few years. Twenty years ago we might have a system that had four
processors, each of which ‘ticked’ away at 50 million ticks per second
(50 megahertz). We'd most likely have one gigabyte of memory, and the
processors would be built using one million transistors. This would be a
processor of approximately the Intel 486 class. Also, these four processors
would actually be four separate chips.
These days the numbers are very different. The individual processors
'tick' at a rate of 3 billion ticks per second (3 gigahertz), and each chip
has 2.6 billion transistors and multiple processing cores. To put this
in perspective, if transistors were people, the old CPUs with a million
transistors would be equivalent to a city such as Birmingham, UK, or
Brussels, Belgium. A single modern microchip would represent one third of
the population of the planet. A single server might be 8 CPUs of 15
processing cores each, totalling 120 processing cores, and this would access
6 Terabytes of data, that is 6,000 Gigabytes.
So whereas 20 years ago a typical business system might have four separate
processors, each ticking away at 50 million ticks a second, pretty soon we
can see we'll have a single, simple server with 480 processing cores, each ticking away at 3.4 billion ticks per second.
So we have over a hundred times the processing elements, and those processing elements tick away over seventy times faster! Plus, as we'll see, there is
another wrinkle too, which we'll discuss in a minute: these new processors can process many data items at a time with each instruction; the old ones could
handle just one at a time. This means we have over 100,000 times more compute power available on a single, simple server! And, of course, this is very cost
effective as it's all commodity (Intel) microprocessors; cost per unit of performance fell by something like five orders of magnitude.
Thus, computers now are completely different animals to what they were 20 years ago. In fact it is misleading to use the same words “CPU”, “Processor” etc
to describe these wildly different things. One of these modern servers is the equivalent of several data centres from 20 years ago.
The question is, how can you tap into this amazing computing power? As we'll see, you can, but you have to do it in a completely different manner to
how traditional systems have worked. Remember, this trend has only really taken off in the past 5-8 years, so any software written before then will typically not
be using the techniques needed to fully exploit this opportunity.
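As a rough back-of-envelope check on these multipliers (a sketch: the SIMD width of eight values per instruction is an assumption, not a figure from the slide):

```python
# Back-of-envelope arithmetic for the "100,000x more compute" claim,
# using the figures quoted above.
cores_then, cores_now = 4, 480        # processing elements per server
clock_then, clock_now = 50e6, 3.4e9   # ticks per second
simd_width = 8                        # values per instruction (assumed)

speedup = (cores_now / cores_then) * (clock_now / clock_then) * simd_width
print(f"~{speedup:,.0f}x")            # ~65,280x - the same order of magnitude
```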
HANA works in a fundamentally different way to other systems;
this initially looks complex, but is actually easy to understand.
Firstly, don't panic! This may look very technical but it is actually very easy
to understand, so please just go with the flow and stick with this! You will
understand it, and thus understand why we are so different, and so are able
to deliver major new business capabilities - plus you'll be able to impress
people at dinner parties with your knowledge of modern microelectronics!
This diagram, using information from Intel, shows the relative access times and
data speeds for different parts of a typical computer system. It's a little bit
out of date, but the principle remains the same.
OK, so what did we do with all these transistors? The diagram shows three
Intel 'Xeon' chips; these are the blue boxes. In the middle chip we are
looking inside the chip to see two things: multiple processing cores and the
special memory caches (Level 1, Level 2, Level 3) contained in the chip.
CPUs used these extra transistors to have more cores, more processing
elements on each chip - starting with two, then four, then six; we're now up to eighteen cores per chip and we expect this to increase further. So each chip now
contains many processing elements.
A modern system is made up of different parts that are interconnected,
typically CPU, memory and disks. In addition different processors can also
talk to each other. There is the CPU, the memory embedded in the CPU, the
separate RAM memory, other processors and disks – both hard disks and
solid-state disks. The red labels show the length of time it takes to get to the data, and the green shows the speed in gigabytes per second to transfer data. A
nanosecond is one thousand-millionth of a second, or one billionth of a second. Mechanical or external devices can't keep up with the speed of modern silicon.
To get data from the fast memory caches (levels 1, 2 and 3) on the chip takes just 1.5, 4 or 15 nanoseconds - very, very fast. But to get data from a hard drive
takes an enormous time: ten million nanoseconds. Even solid-state disks, external devices, take 200,000 nanoseconds.
So, to keep all those many processing cores fed with data, and thus use their full, amazing processing potential, we need to use the on-board cache
memories, and to keep those cache memories full. In order to do this the software running on the chip has to be aware that these caches are available, and to
be written in a way that can fully exploit them; this is what we mean by 'cache aware'. Software that was written before these memories appeared doesn't know
they are there and can't exploit them; you need to change the way the software is written to make full use of them, and it's hard to 'retrofit' this way of working onto a
traditional system. Plus, we can do this very cost effectively, using commodity CPU and memory chips.
A Useful Analogy
We are not good at thinking in terms of billionths of a second, since that
is so far away from our day to day experience. So here is a good analogy
thought up by one of my German colleagues. Imagine we are sitting in a
house in London, enjoying a drink. In this analogy we substitute beer for
data.
The beer you are consuming is the data in the CPU; it is immediately
available and being processed.
The beer in Level 1 cache is the beer on the table, within easy reach, 1
metre away.
If we need more, then we can go to the kitchen refrigerator, 4 metres away
- this is like level 2 cache.
Then there’s the refrigerator in our garage not more than 15 metres away
- level 3 cache.
Up to this point we are still just using beer in our house (that is, data from
memory on the CPU chip); we have not even gone to DRAM, that is, left
our premises.
If we need more than this, then we can go down the street, not more than
40 metres away; fortunately we are next door to a liquor store - that's our
RAM memory.
But what happens if we run out of beer (data) and have to go further, to
the bulk store - the brewery warehouse? That is the equivalent of the hard
drive.
A Useful Analogy
What happens if we run out of beer (data) and have to go further, to the
bulk store – the brewery warehouse, the equivalent of the hard drive? In
that case we have to go to Milwaukee, USA – 6 million metres, or 6,000 km,
away!
(Of course, if we wanted to save some time we could use SSD and just go
to the south coast of the UK; that would reduce the distance to just 120
kilometres to get our next beer.)
What this shows is the huge difference between the ability of silicon to process
data and the ability of mechanical devices to feed it – a gap that has
opened up in the last 7-8 years. Software written before then cannot exploit
these features because it doesn't know they're there. Where these
techniques are starting to be used by others, they are typically 'bolt-ons' to
existing complex systems and have various restrictions imposed on them.
The figures below are rough approximations, but they give a good sense of the relative
distances. (Check for current numbers; they are improving all the time.)
Tier        Latency (ns)     Analogy (m)     Analogy (km)
CPU                    0            0.00             0.00
L1                   1.5            1.00             0.00
L2                     4            2.67             0.00
L3                    15           10.00             0.01
RAM                   60           40.00             0.04
SSD              200,000      133,333.33           133.33
HDD           10,000,000    6,666,666.67         6,666.67
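The analogy's arithmetic is just a linear scaling, pinned so that the 1.5 ns L1 latency maps to 1 metre; a few lines of C++ (purely illustrative) reproduce the distances above.

#include <cstdio>

int main() {
    const double ns_per_metre = 1.5;  // calibrated so L1 (1.5 ns) == 1 m
    const struct { const char* tier; double ns; } tiers[] = {
        {"L1", 1.5}, {"L2", 4}, {"L3", 15}, {"RAM", 60},
        {"SSD", 200000}, {"HDD", 10000000},
    };
    for (const auto& t : tiers)
        std::printf("%-4s %14.1f ns -> %14.2f m\n",
                    t.tier, t.ns, t.ns / ns_per_metre);
}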
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 64Public
When you store tabular data you get to store it in one of two ways
We’ve now established that to get access to the amazing speed of modern
processors we have to use all those multiple cores, and feed them via the
cache memories held within the chips.
Column-based data stores are one key technique that helps us do our
work in-memory; they have become both proven and popular in recent
years.
We tend to hold our data in tabular format, consisting of rows and
columns. This is the format used by all relational databases, and it is the
way HANA represents data too – a very familiar and standard format.
When you store any data in memory, or on disk, you need to do this in
some kind of linear sequence where data bytes are strung out one after
another.
You can either store the data row by row (most databases do this),
or you can store it column by column. We see this illustrated
above; we'll now explore the implications of each, and most importantly
how the choice affects our ability to exploit these modern advances in computer
chips. This may not be immediately obvious, but it soon will be – the sketch below shows the two layouts side by side.
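As a minimal sketch (hypothetical table and field names, not HANA structures), here are the two layouts in C++: rows as an 'array of structs', columns as a 'struct of arrays'.

#include <cstdint>
#include <string>
#include <vector>

// Row by row ("array of structs"): all attributes of one row sit adjacent in memory.
struct OrderRow {
    std::uint32_t id;
    std::string   customer;
    double        amount;
};
using RowStore = std::vector<OrderRow>;

// Column by column ("struct of arrays"): each attribute is one contiguous vector;
// row N is simply the N-th entry of every column.
struct OrderColumns {
    std::vector<std::uint32_t> id;
    std::vector<std::string>   customer;
    std::vector<double>        amount;
};

Either layout holds exactly the same table; what differs is which values end up next to each other in the linear sequence of bytes.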
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 65Public
For rows, data is moved to the processor in ‘chunks’ but only a tiny
proportion of those ‘chunks’ are useful
Here we see the data laid out and physically stored in rows, in the more
traditional manner we’ve used for decades.
Using this row-based format we have to skip over the intervening fields to
get the values we want. For example, if we want to sum the number fields highlighted
with red boxes above, first we read the first one, then skip over
some fields to find the second, then skip over some more to find the
third, and so on. These rows can be hundreds of attributes long; each row
may be hundreds or thousands of bytes, and 1,000 bytes would not be unusual.
Processors typically fetch data in chunks, and bring them to the processor
to have computations done on them.
In this diagram the alternating blue and green lines show the successive
‘memory fetches’ which are retrieving data ready for computation to take
place.
A processor typically fetches data from cache memory 64 bytes at a time,
but a row may be 200, 300, 500 or more bytes long. It therefore takes
many fetches to reach the next useful value to be added, so most of
the time we're skipping over 'padding' between the useful values. While
this is going on the processor is 'spinning its wheels', ticking away
waiting for the next chunk of data that contains a useful value to operate
on.
So, to run at full speed and get the maximum out of these fast chips, it's not
enough to have many fast processors; we also need to make sure that the
next piece of data the processor wants is sitting waiting in cache memory,
ready the instant the processor asks for it. The sketch below makes the
contrast concrete.
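Continuing the toy layouts from the earlier sketch (OrderRow and OrderColumns are the hypothetical types defined there, not HANA structures), summing one attribute shows the difference: the row version drags every other attribute through the cache as 'padding', while the column version streams nothing but the values it is about to add.

#include <vector>

double sum_amount_rows(const std::vector<OrderRow>& rows) {
    double sum = 0;
    for (const auto& r : rows)
        sum += r.amount;        // each cache line fetched also carries id, customer, ...
    return sum;
}

double sum_amount_column(const OrderColumns& cols) {
    double sum = 0;
    for (double v : cols.amount)
        sum += v;               // contiguous doubles: every fetched byte is useful
    return sum;
}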
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 66Public
With data stored in columns “Every Byte Counts”
Now consider the column-based format. Here the number fields are held
together one after another, in one big column, as are the other columns' values.
This leads to a very 'dense' data format that is easy to compress and where
every byte of data counts.
We can see how easy it is to take a whole column (which may be millions or
billions of items) and simply feed it into the on-chip memories in a
continuous flow. In this way the on-chip memories are continually kept full.
A processor is never kept waiting: each time it is ready to process more data
there is a cache full of data close by, and thus we make full use of our fast modern
CPU.
Thus column-based storage is not just a way of organizing data efficiently –
it is key to being able to feed modern CPUs with enough data to keep
them busy. Every byte in a column store counts, so if we fill our local
caches with column data we can process it very fast – on top of the benefits we
might get from compression.
The CPU cores are 'ticking away' at 3-3.4 billion ticks per second, and we are
making full use of that incredible speed. Not only that, but processors now
have special instructions that can process many data items at a time, say
10 or 20. We can do this because all the values sit tightly packed
together, so we can grab several at once.
With this design, each time the processor is ready for another 10 tightly packed data items they are already sitting waiting in the cache memory on the CPU. Our
very fast CPU cores, which can process multiple values at a time, never have to wait for data, and thus we get full use of them. This is where we get the
100,000x performance increase that is the key to everything else we do. With it we can do away with the need for aggregates
and indexes and thus massively simplify how we build our applications.
(Note that simply being a column store does not mean another database can do what we can. Column stores were already in use before, but mainly because
they reduce the need for disk IO when the columns are on disk: if we query a table of 100 columns but only need to process, say, two of them, we only
need to retrieve 2% of the data in the table. That speeds things up by 50x (100% / 2%) but does not get anywhere near the performance we see with our
techniques.) The key reason we use column-based data is its fit with modern microchip architectures, and a hedged SIMD sketch follows.
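To illustrate the 'many values per instruction' idea, here is a sketch using x86 AVX2 intrinsics (assumptions: a CPU with AVX2 and a compiler flag such as -mavx2; this is generic SIMD code, not HANA's implementation, and it ignores 32-bit lane overflow for brevity). Eight 32-bit values are added with each vector instruction instead of one.

#include <immintrin.h>
#include <cstddef>
#include <cstdint>
#include <vector>

std::int64_t sum_simd(const std::vector<std::int32_t>& col) {
    __m256i acc = _mm256_setzero_si256();
    std::size_t i = 0;
    for (; i + 8 <= col.size(); i += 8) {
        __m256i v = _mm256_loadu_si256(
            reinterpret_cast<const __m256i*>(&col[i]));
        acc = _mm256_add_epi32(acc, v);              // 8 additions in one instruction
    }
    alignas(32) std::int32_t lanes[8];
    _mm256_store_si256(reinterpret_cast<__m256i*>(lanes), acc);
    std::int64_t sum = 0;
    for (std::int32_t l : lanes) sum += l;           // fold the 8 partial sums
    for (; i < col.size(); ++i) sum += col[i];       // scalar tail for leftovers
    return sum;
}

This only pays off if the next eight values are already sitting in cache when the instruction issues, which is exactly what the tightly packed column layout provides.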
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 67Public
Columnar data stores offer huge benefits, if you can use them in a
general purpose way
Of course, using column-store data also gives us the benefits of
easier compression (because all the data in a column is of the same type)
and the ability to add new columns non-disruptively, that is, without
affecting those already there.
Column stores had already come into use for analytic applications because
they were so efficient for that kind of work: the data compressed well,
was tightly packed together, and you would only retrieve from disk
those columns mentioned by a query. If our query looks at only three
columns of a hundred-column table, we have to scan only three percent of
the data. This saves a huge amount of disk IO and data movement,
hence the query speed-up. But it doesn't get us anywhere near the
100,000x speedup we see through cache-aware CPU working.
It is the ability to use the local cache memory for our main
computation, and thus make full use of the potential of modern
microchips, that is the important concept here.
To take full advantage of modern CPUs in this way we need to be able to
apply the technique across a full range of workloads – OLTP, OLAP, text,
predictive – and it turns out it is suited to all of these, provided you design your system from the
ground up to do so.
In the past, column-based storage performed poorly at updating, so SAP invented a way of doing high-speed updates against a column store. This
is a unique capability, and it is what makes column stores general purpose: we can also use them for text processing, spatial, graph,
predictive and so on – and any mix of them.
This means that whatever components a modern workload has, we get the full benefit of modern CPUs across the full range of work we
might wish to do. All data is held this way, and all processing makes use of it.
Other systems are starting to use these techniques, but often they are 'bolted on' to the existing database, so all the traditional cost and complexity are still
there: you have to nominate which data to hold in memory, it is usually used only for read-only queries, you have to do your updating somewhere else, and other
styles of processing like text, spatial and predictive can't use these techniques. So you don't get the simplicity of design and development or the runtime speed
advantages.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 68Public
All those extra transistors can be used in a number of ways to
speed up computation and throughput – by 100,000x or more
So, to summarise and see the whole flow within the system: we've seen that
whilst we cannot speed processors up any further, we can put more and
more transistors on them. Let's consider how we might increase processing
power by using those extra transistors.
Firstly, we can put more than one processing core on a single CPU chip. This
is now well established: we started with dual-core chips, then went to four,
six, ten, fifteen and beyond. More cores mean more computation
done – provided our software can exploit them.
We want to keep those cores busy, so we can use other sets of those
transistors to implement very fast memory very close to the cores, actually
on the CPU chip itself. This means cores seldom have to wait for their next
batch of data – provided the software knows how to use it.
Likewise, looking at each individual core, we can add some local
cache to each core – in fact two levels of cache, one feeding the next.
That means that within each core, when we move data into the processing
registers the data is highly likely to be right there, so the instructions never
have to wait: the data has been pre-positioned in those very fast memory
caches right next to the registers.
Going down one more level, we can turn extra transistors into processing
power there too. Traditionally a processing element, using its register, would process one value at a time: an individual
instruction would load a single value, do something with it, such as add or compare another single
value, then store the single-value result somewhere.
But what if we used our extra transistors to widen the register, say from 32 bits to 256 bits, and implemented special instructions that could load,
add and store multiple values in the register at once? Then with each instruction we could process many times more data. Of course we'd be consuming data values at
a very high rate, processing them 'N at a time' – but we can rely on our close memory caches to have the data ready so we don't spin our wheels
waiting, again provided the software knows about these features and how to use them.
Software created eight or more years ago will typically not be able to exploit these features, because they did not exist at the time. This would be very difficult to
retrofit to an existing row-based, disk-based database – you really do have to start with a blank sheet of paper.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 69Public
HANA Memory Data Flow
In this view we show the flow of data from DRAM storage into the registers
where it is processed.
We can see from this why it is essential that we hold our data in columns:
if we do, we can have a stream of compact,
tightly packed data values constantly flowing into the Level 3, Level 2 and Level 1 caches on the CPU chip and within the processing cores, keeping them filled
so that the processors never have to wait for more data.
Modern CPUs have wide registers, 128 or 256 bits, and special
instructions that can process many values at once if they are packed into
a register together – so with a single instruction we might process ten or
more values at a time!
Whenever the register needs another 10 values, say, they are ready
and waiting. This means the very fast (3.4 GHz, or 3.4 billion 'ticks per
second') CPUs we have available can be fully utilised because they always
have data ready to process, and thus, unlike other software, we realise
their full power.
To make use of vector processing we have to compute on data held
in a compact binary format, and we do this using dictionary compression, which compresses to save space but, more importantly, gives us a binary
representation of the data that we can compute with directly – for example when we execute comparisons that need the order of values to be preserved.
If the data is not compressed in this way we cannot use the vector instructions and we cannot keep the Level 1, 2 and 3 caches filled with data
ready for the CPUs. (If the data is not in binary format, and is therefore larger, it takes longer to transfer or to put into the right format, and while this
happens the CPU is idle, twiddling its thumbs. HANA's dictionary-compressed, cache-aware processing avoids this loss of efficiency.)
The dictionary compression also makes the data smaller, of course, so we can fit more in DRAM and more in the CPU caches – but this is really a
beneficial side effect that makes our in-memory system even more economical by needing less storage. The key reason we use dictionary-compressed data is
that it allows us to follow the chain of logic that enables the very fast and efficient vector processing mentioned at the beginning. A minimal encoding sketch follows.
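As a minimal sketch of the dictionary-encoding idea (hypothetical names; HANA's actual implementation is more sophisticated): distinct values go into a sorted dictionary and the column itself stores only small integer codes.

#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct DictColumn {
    std::vector<std::string>   dictionary;  // sorted distinct values
    std::vector<std::uint32_t> codes;       // one compact code per row
};

DictColumn encode(const std::vector<std::string>& values) {
    DictColumn col;
    col.dictionary = values;
    std::sort(col.dictionary.begin(), col.dictionary.end());
    col.dictionary.erase(
        std::unique(col.dictionary.begin(), col.dictionary.end()),
        col.dictionary.end());
    for (const auto& v : values) {
        // Binary search: the dictionary is sorted, so codes preserve value order.
        auto it = std::lower_bound(col.dictionary.begin(),
                                   col.dictionary.end(), v);
        col.codes.push_back(
            static_cast<std::uint32_t>(it - col.dictionary.begin()));
    }
    return col;
}

Because the codes preserve the order of the original values, a range predicate becomes one binary search on the dictionary followed by a tight integer scan over the codes – exactly the kind of dense, fixed-width data the caches and vector instructions need.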
We have customers that report processing speedups of 100,000x or more.
We can 'spend' some of this huge speed advantage by trading it off against the need to pre-compute aggregates: if we do away with these, we do away
with 60-95% or more of the complexity of the application. It becomes much simpler, smaller and more elegant, and the business benefits of productivity, agility
and TCO flow directly – but to achieve this you have to do everything we've described.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 70Public
What SAP means by “in-memory” can be very
different to what others may mean
So, from the preceding information we can see that when we talk about HANA being an
'in-memory' machine, we are talking about completely different types of memory being
used in completely different ways from traditional DRAM.
The key point here is that the cache memories in the CPU and ordinary dynamic
RAM ('DRAM') are completely different things. HANA is designed from the
ground up to do its computation in the cache memories and thus gain access to very
high-speed computation, and we do this for all our processing. Other systems simply
put data in DRAM to save IOs, or make limited use of the CPU caches for limited
tasks. They are completely different techniques.
In the past database vendors have used larger amounts of DRAM to hold data from
disk to avoid having to incur the overhead and time delay of disks. There is nothing
wrong with this, and it gives you a good speed boost. This is a well understood
technique that has been used for many years. The more data you hold in the memory
buffers in DRAM the fewer times you have to incur the delay waiting for disk and
everything speeds up.
But it does not give you the 100,000x speed boost we see with HANA – the boost that is needed to simplify our systems considerably.
When we talk about 'in-memory' in relation to HANA we are talking about the Level 1, 2 and 3 caches inside the CPU: taking the column-store data,
ensuring those caches are constantly kept full, and feeding our dictionary-compressed binary data many values at a time to fully utilise the vector (SIMD)
instructions that process many items with each instruction.
This is a fundamentally different way of doing our computing and to do it we needed to design HANA from the ground up.
Another key point is that not only do we fully exploit these new CPU features, which have appeared only in the last 7-8 years, but we do so for ALL our
processing: we have a system that can take advantage of this much more efficient way of doing things for relational database work, predictive, planning, text
processing and so on. Better still, we have invented a way of storing the data that allows us to do transactional processing on it too –
so we get the speed advantage and can apply it to our whole workload. This means we don't have to think about which part of the system to use for which
part of the workload; we can use it for everything we may wish to do – a considerable design simplification and a source of productivity and agility.
So simply saying that something works 'in-memory' is not sufficient; we then have to ask, 'Which memory do you mean exactly, how do you use it, and what can
you use it for?' The answers will be very different for different products.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 71Public
Traditional table design in disc-based RDBMS – many tables
Complex assemblies of tables complicate BI development and change
Let's just recap on what happens under the surface of the traditional way of doing
things, where data is stored as 'rows', or records, with many attributes held per
row. Here we see a large master record containing many attributes, surrounded by other
tables; let's explore what those tables are. The master table itself will have many attributes,
possibly several hundred; they may represent attributes used in many different industries,
many of them not relevant to a particular company.
The turquoise block represents a one-character indicator that we've just changed, say from '1'
to '2'. To do this we have to update the entire record, which may be 2,000 characters long
or more. We also have to write an equivalent-sized audit record, or maybe we can get away
with keeping just a subset of the record, as we see at the top.
On the right-hand side we see extra tables where we keep pre-aggregated
and summed results. Of course, if we update the base record we may have to update these tables too,
as various SUMs and COUNTs may have changed. These are application tables, but in the
background we may also have similar tables helping the database manager produce
results, and they have to be updated too. We see our turquoise blobs appear wherever
updates may be necessary. On the left-hand side we see a similar structure, but this time
for secondary indexes: if we have many invoices mentioning products we may also
wish to find out quickly which products are in which invoices, so we have tables,
maintained by the application, which tell us this; these may also need to be updated.
We may also have similar indexes maintained by the database rather than the application, and these need to be updated too.
To summarise: we need a multi-table assembly to get the required business information, plus auxiliary data sets supporting disk-based row-table updates
to achieve acceptable write and read performance, and these are complex and time-consuming to optimise. We also need complex ABAP development and/or
a complex data model to maintain these structures, keep them synchronized, and navigate them for reporting and querying.
This is inherently complex and requires exhausting unit and integration testing across the data set and table assembly to check any "small"
enhancement request from the business user community. It makes ad-hoc, on-the-fly change almost impossible, makes timely
changes to the system very difficult, and makes deploying upgrades and enhancement releases very costly. This way of working maximizes the constraints on
delivering innovation to the business. However, with traditional disk- and row-based technology there is no choice; this is the way it has to be done.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 72Public
S/4HANA design concept makes use of the much simpler and more
elegant columnar data method
The S/4HANA design concept is based on inserting only the single changed value
into the in-memory column store, for the field where the change
occurred. It works like this.
Each attribute has its own column of data; it no longer physically shares a row
with other attributes (but we know which attributes constitute a row – they are
the Nth value in each column for the Nth row).
When an attribute is modified we don't overwrite the old value; instead we
insert a new value and mark the old one as superseded. We have written one
piece of information and modified another – that's it; we've avoided writing
whole rows.
Because we still have the old value we can reconstruct the old rows for audit
purposes at any time – in fact, for any point in time. Thus we don't
need all the complex audit data structures of whole or partial extra rows.
This is all done live and in memory, of course; we write the change to a
permanent log in case we need to recover, but that is fast and simple.
(Note, for the technically minded: SAP HANA adds a validity vector to the
tuple, the individual value, to identify the most recent record from a timeline
perspective; the validity bit is read during query execution to speed up the
overall read dramatically.
The bit vector speeds up reading the differential buffer, where the uncompressed delta changes are kept, so that the combined
read across the differential buffer and the compressed main store is much faster than reading a row record on disc, or even in memory, as the system can
perform a table scan. For each record, the validity vector stores a single bit that indicates whether the record at that position is valid. The vector remains
uncompressed to ensure proper read and write speed.)
We don't need indexes, as every column effectively works as its own index: if we want to pick out particular values we scan the dictionary for that column, pick
out the bit-compressed codes we need, then do an ultra-fast scan of the column; no separate data structures are needed.
Likewise we no longer need to pre-aggregate data; we have sufficient speed to roll up any data we wish on the fly. The sketch below illustrates both ideas.
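As a hedged sketch of the insert-only idea with a validity vector (hypothetical names; the real delta store and merge process in HANA are more elaborate): an update appends the new value and clears the old entry's validity bit, old versions remain available for point-in-time reconstruction, and scans simply skip invalid entries.

#include <cstddef>
#include <cstdint>
#include <vector>

struct VersionedColumn {
    std::vector<std::uint32_t> codes;  // dictionary-encoded values, append-only
    std::vector<bool>          valid;  // one bit per entry: current version or not

    std::size_t insert(std::uint32_t code) {
        codes.push_back(code);
        valid.push_back(true);
        return codes.size() - 1;
    }

    // An "update" appends the new version and invalidates the old one.
    std::size_t update(std::size_t old_pos, std::uint32_t new_code) {
        valid[old_pos] = false;
        return insert(new_code);
    }

    // Count current rows matching a code: a single tight scan, no index needed.
    std::size_t count_current(std::uint32_t code) const {
        std::size_t n = 0;
        for (std::size_t i = 0; i < codes.size(); ++i)
            if (valid[i] && codes[i] == code) ++n;
        return n;
    }
};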
This is still fairly abstract, so let's look at the practical consequences by turning to our Logistics application.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 73Public
S/4HANA Example of dramatic simplification:
Logistics - Materials Management
Here is the Logistics application before and after
conversion to the simplified version using HANA.
At the top of the slide we see the original data structures
needed. These include all the kinds of things we
spoke of earlier: not just the main table (MSEG) but all the
other tables needed to support it, all the layers of
aggregation, secondary indexes, etc.
This came to a total of 28 tables to be created, maintained,
tuned, administered and changed over time.
At the bottom we see the new application data structures,
basically a single table. Think of the implications of this in
how easy it is to introduce new innovations and how
simple it is to create new reports that use the data without
having to worry about the impact on the other 27 tables
that are now no longer there. Think how much less
error-prone this is, how quickly changes can be made and
how much less testing and administration is needed –
and therefore how much easier it will be for us to introduce
new innovations on top of the ones already enabled by
HANA. This is what allows us to pull ahead of the market
and stay there. We will simply be better able to innovate
than companies reliant upon previous generation
technology, and our customers will benefit by being able
to differentiate themselves in ways that can’t be provided
by other software applications.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 74Public
Example of massive simplification:
SAP Accounting powered by SAP HANA
This exploitation of modern microchips and the incredible speed that it makes
possible allows us to do away with pre-aggregated data, indexes and other
‘supporting data structures’.
A good illustration of this is what we have done with our Simple Finance
application. Please note: Whether you are planning to use, or considering our
Finance application is not the point here, here we are using it simply as an
example of how we can simplify a major application.
It was the first of the SAP ERP applications to be re-engineered to use the
unique capabilities of HANA. It is one of our most important innovations, and
was introduced in 2014. It is essentially an entirely new architecture for SAP
ERP Financials.
Its main characteristics are:
• Convergence of FI and CO. There is one logical document that is the
common basis for both regulatory and managerial accounting. This creates a
much higher degree of consistency between FI and CO,
abolishing the need for time-consuming and error-prone manual
reconciliation. Since they are now the same record, they can't be different.
• Abolishment of pre-defined aggregates. All aggregation and transaction
processing is now performed ‘on the fly’ based on HANA Live views. The
totals and indices in FI and CO are a matter of the past. This fact is a further
move to preserve data consistency throughout the entire technology stack.
As a side effect and due to HANA, memory consumption is drastically
reduced, helping to reduce TCO.
• More flexible reporting. As a beneficial side effect, reports can now be configured by the end user in a very flexible way, without requiring any assistance from IT.
We can report on any attribute.
Large customers in particular are interested in leveraging Simple Finance on SAP HANA for harmonized real-time internal reporting (the so-called central journal
/ finance approach) – prior to consolidating their full system landscape into a single ERP system.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 75Public
Finance System – Before HANA
Here’s a good concrete illustration of what we mean by complexity, and
how we provide radical simplification.
Under the Financial application there are about 23,000 'data objects' –
tables, aggregates, indexes and so on.
Every little blue rectangle on this diagram represents ten objects, so in
fact the diagram itself is considerably simplified!
Now let's consider a business operation such as a corporate
restructuring, an acquisition or a demerger. How will that affect the
values that have been pre-calculated and deposited in all these physical
data objects?
The only way to be sure is for someone to go through them and think
about them, design any changes, develop the change mechanism, then
unload, reload and recalculate the data. Clearly a very onerous task,
and therefore a significant impediment to organisational change.
Remember, when we are evaluating, say, three scenarios, we need to work
out which subset of these physical data objects is needed to represent
each scenario. We have to replicate these three times and load them
with data structured according to each option. When we've done that, is
scenario three better than scenario one because it is genuinely better, or does it just
look better because something changed in the data during the five weeks it
took to load everything?
By the way, when we do this we also need to worry about which objects
can co-exist with other objects on our disk storage, which can and
should be separated in order to make loading or querying more efficient.
So what is the implication of the simplification that we have done?
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 76Public
Finance System – After HANA
In the simpler Finance application we have removed 95% of all the data
objects!
That is, we have removed 95% of all the things we have to worry about
and maintain in the application.
The function remains exactly the same. In fact we can now do things we
could not do before, because we have been freed from being slowed
down by the complexity.
It is intuitively obvious that the diagram above will be significantly easier
to work with: we will be able to deliver function much faster, at less cost
and with much greater flexibility.
If we want to change company structures and roll them up another way,
we just go ahead and do it – there is no need to unload and reload
the data.
If we want to represent the company in several different ways we can do
that too, because there is no cost in defining the different structures –
we don't have to store the data.
This will help us change our organisations 'on the fly' and
move toward continuous intercompany reconciliation and continuous
close – and many more functions besides.
It is worth flicking back and forth between the last page and this and just
thinking about the implications for development, for system change and
administration.
SAP invented HANA to make it possible to introduce new
applications and change our existing ones much faster and more easily,
and from the above you can see how we succeeded – you can do the
same to make your organisation more productive and agile, and to
lower costs.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 77Public
SAP HANA
More than just a database – a platform
Of course we need to build these capabilities into a complete working
system, and we have done this with what we call the SAP HANA Platform.
At its core is the in-memory computing engine we've discussed,
complemented by large-scale disk-based column
stores, stream processors, data replication, extract-transform-load
capabilities and easy integration with open-source systems such as
Hadoop. It is a modern architecture containing a number of
complementary engines. The core engines run in memory to provide the
simplicity and agility that in-memory gives, extended by many
other engines outside the in-memory core: engines for multi-tier
storage, Hadoop as an engine for large volumes of unstructured data,
engines that happen to sit in legacy systems, and specialist engines
such as those for streaming or for communication with mobile devices.
This architecture is simple, elegant and modern. It allows for any mix of
processing, provides very cost effective IT and yet gives you the
productivity and agility advantages of in-memory.
These are generic advantages, and they can be used for SAP
applications, non-SAP applications or mixtures of the two.
This allows us to use the platform to address a wide range of information
requirements and match the right kind of tool to the right job. It allows us
to very easily integrate the SAP HANA Platform into existing IT estates,
complementing what is already there, and then meeting requirements in
a simpler and more cost effective manner.
We'll not dwell on this now, as SAP has many comprehensive
presentations to take you through the SAP HANA Platform. The point here
is that, having invented a fundamentally simpler and more effective way
of building enterprise systems, we have built this out into a complete
platform.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 78Public
SAP HANA, the great simplifier of enterprise software
The vision of SAP HANA has been introduced in cautious, well-considered steps over a
number of years, introducing new function gradually and fully exercising the core
platform. In 2011 (actually 2010) we released HANA as a stand-alone analytical
platform.
From 2012 customers could drive real-time operational reporting and simplify how they run
their SAP BW; SAP BW powered by SAP HANA has already exceeded 1,600 customers.
From 2013 our customers could start working as real-time businesses and simplify how
they run their SAP Business Suite by bringing transactions and analytics together on
a single in-memory platform. SAP Business Suite powered by SAP HANA exceeded
1,800 customers in only two years (one of the fastest-growing products in SAP's
history).
Business Suite, specifically enabled by HANA, provides our customers with significant
benefits through massive simplification, much greater flexibility and much greater
performance – all at less cost. More than 5,800 customers have already adopted the
platform to do just that with our existing applications optimized to run on SAP HANA.
In June 2014 we enabled people to drive real-time insight across finance and simplify how they run their finance system with SAP Simple Finance powered by SAP
HANA, simplifying the application by 95%: removing 22,000 data objects, aggregates, data redundancies and replication.
With S/4HANA, we are building on the success of the SAP Business Suite powered by SAP HANA with a completely new suite. S/4HANA is built only on SAP HANA,
because only HANA can deliver the level of simplicity customers require today.
Now, in 2015, S/4HANA is natively built on SAP HANA for massive simplification (a simplified data model: no indices, no aggregates, no redundancies), and uses
SAP Fiori to offer an integrated user experience with modern usability (the interfaces are role-based, with three steps at most to get the job done, developed
'mobile-first', and offer a consistent experience across the various lines of business).
S/4HANA is natively built for advanced innovations (e.g. new applications that predict, recommend and simulate, and processing of text, geo data, graphs and genomes).
It is also natively engineered to provide choice of deployment (on-premise, cloud, hybrid), and it fully leverages the new multi-tenancy
functionality enabled by SAP HANA.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 79Public
Some facts about SAP S/4HANA
Now that we've established how we've been able to do something very
different and fundamentally simplify enterprise systems, let's look at how this
expresses itself in our new generation of applications, and let a few
current facts about S/4HANA speak for themselves.
It is clear from these that we have succeeded in the vision we set out to
realise. S/4HANA is a new code base, with Fiori as a new user interface (UX),
plus new 'guided configurations' that help with adoption.
These simplified applications are seamlessly integrated to offer one solution
for a wide range of business problems. They all get a completely modern,
web-based Fiori user experience – ready for real cloud consumption.
All these capabilities taken together make this a completely new
product: new database, new data management, new technology and new front end.
A major achievement is the ability to reintegrate ERP, CRM, SRM, SCM, PLM in
one system - to save hardware costs, operational costs and time.
This is possible because S/4HANA has a 10x smaller data footprint compared to
a best-in-class business suite on a traditional database. And remember,
we now implement these on platforms that are large, scalable clusters of processors:
a unified system, but one that is easily and linearly scalable.
Thus we can see that in some ways we have come full circle, back to the
integration of multiple modules and applications on a single platform, but now
considerably simplified and with much greater performance.
Another example is fewer process steps: in the receivables-processing app, going
from SAP GUI to Fiori/Simple Finance cuts the number of screen changes from 8 to 2 (4x).
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 80Public
Revising our Assumptions about System Design
Things that are now much easier with HANA
Before we go on, it's worth a quick checkpoint. We've explained how HANA was
specifically designed to solve major problems with existing systems. Some of what we've
said about HANA is counter-intuitive, because it goes against twenty or more years of
experience with computers. There are certain assumptions we make about system
design because they are based on long experience. But here we must not
just learn about HANA but unlearn many of those assumptions, because they are no
longer true. If we have a requirement for high-speed transactional updates against
a data store that we simultaneously want to run complex analysis against, we can do it. In
the past we'd have to separate these two pieces of the requirement into different parts
of the system, an OLTP updating system and a complex query system, and then
implement logic to connect them, keep them in sync and so on. That is no longer true; HANA
allows us to combine both on a true single copy of the data.
Thus we can do BI queries against the operational data, either a custom application
we’ve written, or an SAP application such as the Business Suite.
We can develop using a simple data model that consists of pretty much the logical
business data model, rather than having to embellish it with lots of added aggregations
and indexes. This makes for much faster development and much easier modification.
We can implement more requirements and keep the system up to date with changed requirements – more requirements met equals more benefit.
Large-scale math can be used, often interactively. Previously we shied away from this because response times would be too long, but now we can insert
heavy-duty math into what looks like an interactive OLTP transaction and get superior results from more sophisticated analysis.
Because of the ease of change we can commit to keeping the application up to date more easily, supporting a faster pace of change, for example modifications
to analytic rule bases or constantly changing reports.
We can check out our data, even at production volumes, in seconds. This allows a much faster pace of development, and we often don't need separate
project phases, one for functional development on a subset of the data and others for full-volume testing.
Likewise we can most likely do away with most batch processes and instead streamline our processes so that they become interactive, thus enhancing the
productivity and responsiveness of those who work with the system, both developers and users.
So this is just an aide-memoire of things to reconsider, things that used to be complex and scary but which now are relatively straightforward. We’ll explore
some of these in more detail now.
© 2016 SAP SE or an SAP affiliate company. All rights reserved. 81Public
Speed allows elimination of aggregates, indexes
this alone can result in a 90%+ reduction in complexity
Thus, importantly, one of the ways in which we choose to spend this speed
increase is not simply in making things faster. Rather, we take that speed and
use it to eliminate complexity: when we can do our processing this fast, we no
longer need indexes and aggregates and all the complexity that comes with
them.
In fact, for most types of processing we can not only eliminate all this extra
complexity and baggage from our systems but still end up with a system that is
much faster than our old ones!
This is the goal of HANA: not simply making things faster, but radically
simplifying everything we do. It represents a fundamentally new way of
doing things, a step change in information systems, just as the mainframe,
client/server, the PC and the Internet were.
Remember, what we are aiming to do is to improve productivity and agility and
radically reduce cost of ownership.
When we look at the above we can see lots of reasons for this. Users get
instant or near-instant results, even for complex processing. Developers can
deliver new or changed function sooner, so users accrue more benefit; they
can do this because the whole system is much simpler to develop in –
there is less to develop, and what is left is simpler to develop with.
If we need to respond to new opportunities and threats we can do so much more
simply too; for the same reasons, simpler systems are more agile to work with.
Obviously we have radically simplified the IT landscape, so we save on
hardware, data centre and administration costs. Aside from being on
commodity Intel systems, we expect to need smaller systems and fewer of
them. Note that they are smaller only in number and size; their compute power
may be many times that of the original systems.
SAP HANA: Enterprise Data Management Meets High Performance Enterprise Computing
 
What is Sap HANA Convista Consulting Asia.pdf
What is Sap HANA Convista Consulting Asia.pdfWhat is Sap HANA Convista Consulting Asia.pdf
What is Sap HANA Convista Consulting Asia.pdf
 
Empowering SAP HANA Customers and Use Cases
Empowering SAP HANA Customers and Use CasesEmpowering SAP HANA Customers and Use Cases
Empowering SAP HANA Customers and Use Cases
 
SAP HANA SPS10- Multitenant Database Containers
SAP HANA SPS10- Multitenant Database ContainersSAP HANA SPS10- Multitenant Database Containers
SAP HANA SPS10- Multitenant Database Containers
 
Disaster Recovery for SAP HANA with SUSE Linux
Disaster Recovery for SAP HANA with SUSE LinuxDisaster Recovery for SAP HANA with SUSE Linux
Disaster Recovery for SAP HANA with SUSE Linux
 
The SAP Startup Focus Program – Tackling Big Data With the Power of Small by ...
The SAP Startup Focus Program – Tackling Big Data With the Power of Small by ...The SAP Startup Focus Program – Tackling Big Data With the Power of Small by ...
The SAP Startup Focus Program – Tackling Big Data With the Power of Small by ...
 
Reduce TCO with SAP Business Suite powered by SAP HANA
Reduce TCO with SAP Business Suite powered by SAP HANAReduce TCO with SAP Business Suite powered by SAP HANA
Reduce TCO with SAP Business Suite powered by SAP HANA
 
IT Simplification with the SAP HANA platform
IT Simplification with the SAP HANA platformIT Simplification with the SAP HANA platform
IT Simplification with the SAP HANA platform
 
Autodesk Technical Webinar: SAP HANA in-memory database
Autodesk Technical Webinar: SAP HANA in-memory databaseAutodesk Technical Webinar: SAP HANA in-memory database
Autodesk Technical Webinar: SAP HANA in-memory database
 
S/4hana Business Audience
S/4hana Business AudienceS/4hana Business Audience
S/4hana Business Audience
 
00- SAP-BASIS-EPSS-EN.pptx
00- SAP-BASIS-EPSS-EN.pptx00- SAP-BASIS-EPSS-EN.pptx
00- SAP-BASIS-EPSS-EN.pptx
 
Sap ac100 col03 sf 1503 latest sample www erp_examscom
Sap ac100 col03 sf 1503 latest sample www erp_examscomSap ac100 col03 sf 1503 latest sample www erp_examscom
Sap ac100 col03 sf 1503 latest sample www erp_examscom
 
S4 HANA Launch MENA
S4 HANA Launch MENAS4 HANA Launch MENA
S4 HANA Launch MENA
 
SAP HANA Developer Access Beta program - 7 steps towards your first HANA report
SAP HANA Developer Access Beta program - 7 steps towards your first HANA reportSAP HANA Developer Access Beta program - 7 steps towards your first HANA report
SAP HANA Developer Access Beta program - 7 steps towards your first HANA report
 
HANA Demystified by DataMagnum
HANA Demystified by DataMagnumHANA Demystified by DataMagnum
HANA Demystified by DataMagnum
 
1310 success stories_and_lessons_learned_implementing_sap_hana_solutions
1310 success stories_and_lessons_learned_implementing_sap_hana_solutions1310 success stories_and_lessons_learned_implementing_sap_hana_solutions
1310 success stories_and_lessons_learned_implementing_sap_hana_solutions
 
The Essential Guide to SAP Cloud, Data Migration, ABAP, and Reporting.pdf
The Essential Guide to SAP Cloud, Data Migration, ABAP, and Reporting.pdfThe Essential Guide to SAP Cloud, Data Migration, ABAP, and Reporting.pdf
The Essential Guide to SAP Cloud, Data Migration, ABAP, and Reporting.pdf
 
CIO Guide to Using SAP HANA Platform For Big Data
CIO Guide to Using SAP HANA Platform For Big DataCIO Guide to Using SAP HANA Platform For Big Data
CIO Guide to Using SAP HANA Platform For Big Data
 
Sap hana by jeff_word
Sap hana by jeff_wordSap hana by jeff_word
Sap hana by jeff_word
 
Sizing sap s 4 hana using the quick sizer tool
Sizing sap s 4 hana using the quick sizer toolSizing sap s 4 hana using the quick sizer tool
Sizing sap s 4 hana using the quick sizer tool
 

More from SAP Technology

Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...
Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...
Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...SAP Technology
 
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...SAP Technology
 
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...SAP Technology
 
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology PlatformAccelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology PlatformSAP Technology
 
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...SAP Technology
 
Transform your business with intelligent insights and SAP S/4HANA
Transform your business with intelligent insights and SAP S/4HANATransform your business with intelligent insights and SAP S/4HANA
Transform your business with intelligent insights and SAP S/4HANASAP Technology
 
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...SAP Technology
 
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...SAP Technology
 
The IoT Imperative for Consumer Products
The IoT Imperative for Consumer ProductsThe IoT Imperative for Consumer Products
The IoT Imperative for Consumer ProductsSAP Technology
 
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...SAP Technology
 
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...SAP Technology
 
The IoT Imperative in Government and Healthcare
The IoT Imperative in Government and HealthcareThe IoT Imperative in Government and Healthcare
The IoT Imperative in Government and HealthcareSAP Technology
 
SAP S/4HANA Finance and the Digital Core
SAP S/4HANA Finance and the Digital CoreSAP S/4HANA Finance and the Digital Core
SAP S/4HANA Finance and the Digital CoreSAP Technology
 
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANAFive Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANASAP Technology
 
SAP Helps Reduce Silos Between Business and Spatial Data
SAP Helps Reduce Silos Between Business and Spatial DataSAP Helps Reduce Silos Between Business and Spatial Data
SAP Helps Reduce Silos Between Business and Spatial DataSAP Technology
 
Spotlight on Financial Services with Calypso and SAP ASE
Spotlight on Financial Services with Calypso and SAP ASESpotlight on Financial Services with Calypso and SAP ASE
Spotlight on Financial Services with Calypso and SAP ASESAP Technology
 
SAP ASE 16 SP02 Performance Features
SAP ASE 16 SP02 Performance FeaturesSAP ASE 16 SP02 Performance Features
SAP ASE 16 SP02 Performance FeaturesSAP Technology
 
Spark Usage in Enterprise Business Operations
Spark Usage in Enterprise Business OperationsSpark Usage in Enterprise Business Operations
Spark Usage in Enterprise Business OperationsSAP Technology
 
What's New in SAP HANA SPS 11 Operations
What's New in SAP HANA SPS 11 OperationsWhat's New in SAP HANA SPS 11 Operations
What's New in SAP HANA SPS 11 OperationsSAP Technology
 
What's New in SAP HANA SPS 11 Application Lifecycle Management
What's New in SAP HANA SPS 11 Application Lifecycle ManagementWhat's New in SAP HANA SPS 11 Application Lifecycle Management
What's New in SAP HANA SPS 11 Application Lifecycle ManagementSAP Technology
 

More from SAP Technology (20)

Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...
Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...
Future-Proof Your Business Processes by Automating SAP S/4HANA processes with...
 
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au...
 
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn...
 
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology PlatformAccelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform
 
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and...
 
Transform your business with intelligent insights and SAP S/4HANA
Transform your business with intelligent insights and SAP S/4HANATransform your business with intelligent insights and SAP S/4HANA
Transform your business with intelligent insights and SAP S/4HANA
 
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En...
 
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl...
 
The IoT Imperative for Consumer Products
The IoT Imperative for Consumer ProductsThe IoT Imperative for Consumer Products
The IoT Imperative for Consumer Products
 
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen...
 
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource...
 
The IoT Imperative in Government and Healthcare
The IoT Imperative in Government and HealthcareThe IoT Imperative in Government and Healthcare
The IoT Imperative in Government and Healthcare
 
SAP S/4HANA Finance and the Digital Core
SAP S/4HANA Finance and the Digital CoreSAP S/4HANA Finance and the Digital Core
SAP S/4HANA Finance and the Digital Core
 
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANAFive Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA
 
SAP Helps Reduce Silos Between Business and Spatial Data
SAP Helps Reduce Silos Between Business and Spatial DataSAP Helps Reduce Silos Between Business and Spatial Data
SAP Helps Reduce Silos Between Business and Spatial Data
 
Spotlight on Financial Services with Calypso and SAP ASE
Spotlight on Financial Services with Calypso and SAP ASESpotlight on Financial Services with Calypso and SAP ASE
Spotlight on Financial Services with Calypso and SAP ASE
 
SAP ASE 16 SP02 Performance Features
SAP ASE 16 SP02 Performance FeaturesSAP ASE 16 SP02 Performance Features
SAP ASE 16 SP02 Performance Features
 
Spark Usage in Enterprise Business Operations
Spark Usage in Enterprise Business OperationsSpark Usage in Enterprise Business Operations
Spark Usage in Enterprise Business Operations
 
What's New in SAP HANA SPS 11 Operations
What's New in SAP HANA SPS 11 OperationsWhat's New in SAP HANA SPS 11 Operations
What's New in SAP HANA SPS 11 Operations
 
What's New in SAP HANA SPS 11 Application Lifecycle Management
What's New in SAP HANA SPS 11 Application Lifecycle ManagementWhat's New in SAP HANA SPS 11 Application Lifecycle Management
What's New in SAP HANA SPS 11 Application Lifecycle Management
 

Recently uploaded

Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al BarshaAl Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al BarshaAroojKhan71
 
Generative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and MilvusGenerative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and MilvusTimothy Spann
 
Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...shambhavirathore45
 
BigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxBigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxolyaivanovalion
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Researchmichael115558
 
Carero dropshipping via API with DroFx.pptx
Carero dropshipping via API with DroFx.pptxCarero dropshipping via API with DroFx.pptx
Carero dropshipping via API with DroFx.pptxolyaivanovalion
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxolyaivanovalion
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfLars Albertsson
 
Ravak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxRavak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxolyaivanovalion
 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxJohnnyPlasten
 
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...amitlee9823
 
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Delhi Call girls
 
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...SUHANI PANDEY
 
April 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's AnalysisApril 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's Analysismanisha194592
 
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Callshivangimorya083
 
BabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxBabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxolyaivanovalion
 
Week-01-2.ppt BBB human Computer interaction
Week-01-2.ppt BBB human Computer interactionWeek-01-2.ppt BBB human Computer interaction
Week-01-2.ppt BBB human Computer interactionfulawalesam
 

Recently uploaded (20)

Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al BarshaAl Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
Al Barsha Escorts $#$ O565212860 $#$ Escort Service In Al Barsha
 
Generative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and MilvusGenerative AI on Enterprise Cloud with NiFi and Milvus
Generative AI on Enterprise Cloud with NiFi and Milvus
 
Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...Determinants of health, dimensions of health, positive health and spectrum of...
Determinants of health, dimensions of health, positive health and spectrum of...
 
BigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptxBigBuy dropshipping via API with DroFx.pptx
BigBuy dropshipping via API with DroFx.pptx
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Research
 
Carero dropshipping via API with DroFx.pptx
Carero dropshipping via API with DroFx.pptxCarero dropshipping via API with DroFx.pptx
Carero dropshipping via API with DroFx.pptx
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptx
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
 
Ravak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxRavak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptx
 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptx
 
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts ServiceCall Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
 
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
(NEHA) Call Girls Katra Call Now 8617697112 Katra Escorts 24x7
 
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
 
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
 
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Hinjewadi ( Pune ) Call ON 8005736733 Starting From 5K t...
 
April 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's AnalysisApril 2024 - Crypto Market Report's Analysis
April 2024 - Crypto Market Report's Analysis
 
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
 
BabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxBabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptx
 
Week-01-2.ppt BBB human Computer interaction
Week-01-2.ppt BBB human Computer interactionWeek-01-2.ppt BBB human Computer interaction
Week-01-2.ppt BBB human Computer interaction
 

Why SAP HANA?

  • 9. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 9Public The Motivating Idea. Two books set out the thinking behind HANA: In-Memory Data Management: An Inflection Point for Enterprise Applications, Hasso Plattner and Alexander Zeier (ISBN 978-3-642-19362-0); and The In-Memory Revolution: How SAP HANA Enables Business of the Future, Hasso Plattner and Bernd Leukert (21 April 2015).
  • 10. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 10Public History 2007: The Business Goals of HANA (source: SAP Suite on HANA announcement, January 10, 2013). Qn: 14 years after R/3, what would an ERP (Enterprise) system look like if we started from scratch? [Hasso Plattner Institute, Potsdam] The goals: all active data in-memory; leverage massively parallel computers (scale with cores vs. CPU speed); use Design Thinking methodology; radically simplified data model; OLTP and OLAP back together; instant BI (on transactional data); no more batch programs; live conversation (instead of "briefing books"); response < 1 sec (for all activities, even complex algorithms); virtual DW for multiple data sources and types; no aggregates or materialized cubes (dynamic views); views on views (up to 16 levels); mobile, wherever appropriate; aggressive use of math.
  • 11. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 11Public HANA Techniques: How to Build a New Kind of Enterprise System. Map reduce; active and passive data store; analytics on historical data; single and multi-tenancy; reduction in layers; multi-core / parallelization; dynamic multi-threading; virtual aggregates; partitioning; minimal projections; no disk operation; insert only; real-time replication; any attribute as an index; text analytics; object-to-relational mapping; group keys; lightweight compression; on-the-fly extensibility; spatial; transactional column store; SQL interface on columns and rows; libraries for stats & biz beyond SQL. Global development, 2007: Seoul, Shanghai, Ho Chi Minh, Bangalore, Tel Aviv, Berlin, Walldorf, Paris, Toronto, Vancouver, Dublin CA, Palo Alto.
  • 12. © 2016 SAP SE or an SAP affiliate company. All rights reserved. https://www.youtube.com/watch?v=jB8rnZ-0dKw
  • 13. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 13Public HANA is designed to be more than just a database. Preconfigured appliance (or VM or cloud): in-memory software plus hardware (HP, IBM, Fujitsu, Cisco, Dell, NEC, Huawei, VCE). In-memory computing engine software: data modeling and data management; real-time data replication via Sybase Replication Server; Data Services for ETL capabilities from SAP Business Suite, SAP BW and 3rd-party systems; data federation (remote RDBMS, Hadoop, ...). Components: row store; column store; calc engine; graph engine; application server (XS server); Predictive Analytics Library; 'R' interface; SQL Script / calc engine language; text engine; planning engine; spatial; Business Function Library; persistence (logging, recovery) with ACID transaction integrity. Interfaces and tooling: MDX, SQL, BICS; Modeling Studio; real-time replication and federation; Data Services ETL/ELT; consumers include SAP BusinessObjects, SAP NetWeaver BW, SAP Business Suite and 3rd-party applications.
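As a small illustration of the "more than just a database" point, the sketch below reaches HANA through its plain SQL interface from Python. This is a minimal sketch assuming SAP's hdbcli client package; the host, port, credentials and the SALES_ORDERS table are placeholders, not details from this deck.

```python
# Minimal sketch: querying SAP HANA over its SQL interface from Python.
# Assumes the hdbcli client package (pip install hdbcli); the connection
# details and table name below are hypothetical placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",  # hypothetical host
    port=30015,                  # conventional SQL port for instance 00
    user="DEMO_USER",
    password="***",
)
cursor = conn.cursor()
# The same SQL interface fronts the column store, calc engine, text,
# spatial and predictive processing listed on this slide.
cursor.execute("SELECT COUNT(*) FROM SALES_ORDERS")  # hypothetical table
print(cursor.fetchone()[0])
cursor.close()
conn.close()
```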
  • 14. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 14Public There has been a revolution in hardware (figures are for single servers). 20 years ago: 1 GB memory; CPU 4 x 50 MHz; ~1 million transistors per CPU. Now: 6 TB memory, a 6,000x increase (6 TB / 1 GB = 6,000); CPU 120 cores x 3 GHz, a 1,800x increase in aggregate clock (360 GHz vs. 0.2 GHz); 2.6 billion transistors per CPU. Near future: 48 TB memory; 480 cores (8 x 4 x 15).
  • 15. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 15Public HANA works in a fundamentally different way to other systems; this initially looks complex, but is actually easy to understand. [Diagram, source Intel: data access latency and bandwidth across the memory hierarchy. Approximate figures: L1 cache ~1.5 ns, L2 cache ~4 ns, L3 cache ~15 ns, local DRAM ~60-100 ns at ~50 GB/s, cross-socket memory ~80+ ns at ~12.8 GB/s, remote server memory ~500+ ns at ~3 GB/s, SSD ~200,000 ns at ~0.5 GB/s, mechanical disk ~10,000,000 ns at ~0.07 GB/s.]
  • 16. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 16Public A Useful Analogy. If the CPU is you at the kitchen table: the L1 cache is the table itself (1 m away), the L2 cache is the kitchen fridge (3 m), the L3 cache is the garage (9 m), and main memory (RAM) is the local London shop (30 m).
  • 17. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 17Public A Useful Analogy (continued). On the same scale, disk is the brewery in Milwaukee, USA: 6,000,000 metres away.
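To see how the analogy's distances line up, the sketch below scales each latency to the 1-metre L1 baseline. The latency values are the approximate figures from the preceding slides, not official benchmarks.

```python
# Scale memory-hierarchy latencies to the kitchen-table analogy:
# L1 cache (~1.5 ns) is defined as 1 metre; everything else scales linearly.
latencies_ns = {
    "L1 cache (table)": 1.5,
    "L2 cache (fridge)": 4,
    "L3 cache (garage)": 15,
    "DRAM (local shop)": 60,
    "Mechanical disk (brewery, Milwaukee)": 10_000_000,
}

baseline = latencies_ns["L1 cache (table)"]
for name, ns in latencies_ns.items():
    metres = ns / baseline  # each ns of latency is about two-thirds of a metre
    print(f"{name:40s} ~{metres:>12,.0f} m")
```

The ratios land close to the slide's 3 m / 9 m / 30 m / 6,000,000 m figures, which is the point: disk is not slightly slower than memory, it is a trip to another continent.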
  • 18. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 18Public When you store tabular data you get to store it in one of two ways. Given a table whose rows are A 10 €, B 35 $, C 2 €, D 40 €, E 12 $ (with many more columns), consecutive memory addresses can hold it either by row (A, 10, €, then B, 35, $, and so on) or by column (A, B, C, D, E; then 10, 35, 2, 40, 12; then €, $, €, €, $; and so on).
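A minimal sketch of the two layouts in Python, using the slide's five-row table (illustrative only; HANA's actual storage structures are far more elaborate):

```python
# Row layout: the values of one record sit together in memory.
row_store = [
    ("A", 10, "EUR"),
    ("B", 35, "USD"),
    ("C", 2, "EUR"),
    ("D", 40, "EUR"),
    ("E", 12, "USD"),
]

# Column layout: the values of one attribute sit together in memory.
column_store = {
    "id":       ["A", "B", "C", "D", "E"],
    "amount":   [10, 35, 2, 40, 12],
    "currency": ["EUR", "USD", "EUR", "EUR", "USD"],
}

# Summing amounts touches every record in the row store...
row_total = sum(rec[1] for rec in row_store)
# ...but only one contiguous array in the column store.
col_total = sum(column_store["amount"])
assert row_total == col_total == 99
```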
  • 19. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 19Public For rows, data is moved to the processor in 'chunks', but only a tiny proportion of each chunk is useful. With the data laid out record by record, summing numbers means that only a small proportion of the data moved can actually be used; the rest is 'padding'. The processor, ticking 3 bn times a second, 'spins its wheels' waiting on data fetches.
  • 20. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 20Public With data stored in columns, "every byte counts": the column is 'dropped' into on-chip cache memory; the caches are kept filled, with no 'padding'; the processor doesn't have to wait; multiple data items are picked up on each fetch; data is held compressed, and computation is done on the compressed form. The result is amazingly more efficient: speed improvements of up to 100,000x.
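Dictionary encoding is one common way columnar stores compress and then compute on compressed data. The sketch below shows the general idea, not HANA's exact format: each distinct value is stored once, and the column becomes an array of small integer codes that can be filtered directly.

```python
# Dictionary-encode a low-cardinality column and filter on the codes.
currencies = ["EUR", "USD", "EUR", "EUR", "USD", "EUR"]

dictionary = sorted(set(currencies))          # ["EUR", "USD"] stored once
code_of = {v: i for i, v in enumerate(dictionary)}
encoded = [code_of[v] for v in currencies]    # [0, 1, 0, 0, 1, 0]

# "WHERE currency = 'USD'" becomes an integer comparison on the codes;
# the compressed form is scanned directly, with no decompression pass.
target = code_of["USD"]
matching_rows = [i for i, code in enumerate(encoded) if code == target]
print(matching_rows)  # [1, 4]
```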
  • 21. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 21Public Columnar data stores offer huge benefits if you can use them in a general-purpose way: highly dense data structures; multiple values available for computation at once; full exploitation of modern CPU architectures; and they can be joined with row-based data. The traditional column-store benefits still apply: the data compresses nicely; new columns can be added non-disruptively (productivity); and processing is reduced to just the columns actually accessed. You get the full benefit by using them for everything: transaction processing, text, spatial, predictive, etc.
  • 22. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 22Public All those extra transistors can be used in a number of ways to speed up computation and throughput, by 100,000x or more: more processor cores per chip, with fast memory to keep the cores fed; each core given its own fast cache, so computations in registers seldom wait (thus making full use of the fast registers); bigger registers (e.g. 256 bits instead of 32, where one register used to hold one data value) with circuitry to process multiple values at a time, the "vector instructions"; and more registers added.
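NumPy gives a convenient way to feel this effect from Python, since its array operations run over contiguous buffers and typically compile down to SIMD-style inner loops. A sketch only; the exact speedup depends on your hardware and build.

```python
import time
import numpy as np

n = 20_000_000
values = np.random.rand(n)

# Scalar-style loop: one value per step, heavy per-item overhead.
t0 = time.perf_counter()
total_loop = sum(values[:1_000_000])   # only 1M items, or we would wait a while
t1 = time.perf_counter()

# Vectorized: one call over a contiguous array, SIMD-friendly inner loop.
t2 = time.perf_counter()
total_vec = values.sum()
t3 = time.perf_counter()

print(f"python loop (1M values):  {t1 - t0:.3f} s")
print(f"numpy sum  (20M values):  {t3 - t2:.3f} s")
```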
  • 23. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 23Public HANA memory data flow: parallelism across servers, CPUs and cores; vector (SIMD) instructions processing multiple values per instruction; scanning at ~5 bn values per core per second; aggregation at ~12 m values per core per second; ~600 bn scans per second on a single server. This applies to ALL operations: RDBMS, transaction update, text, predictive, spatial, ... and it is what enables the simplicity and productivity.
  • 24. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 24Public What SAP means by "in-memory" can be very different to what others may mean. HANA uses the on-chip caches and registers to feed vector instructions, which is where the 100,000x performance advantage comes from, and it does this for ALL of its processing. These memories, and the way they are used, are completely different things to DRAM; to exploit them you need to start from scratch, with algorithms written for these new parts of the processor. Other systems simply use RAM to reduce disk I/O: the traditional use. That doesn't get you to 100,000x, and its other uses are limited (e.g. no OLTP).
  • 25. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 25Public Traditional table design in a disk-based RDBMS means many tables: the main master record and audit/subset records, surrounded by application-built secondary index tables, application-built aggregate tables, DBA-built aggregate tables and DBA-built secondary index tables, accessed via ABAP and the VDM / SQL. These complex assemblies of tables complicate BI development and change.
  • 26. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 26Public The S/4HANA design concept makes use of the much simpler and more elegant columnar data method. Individual attributes are each held in their own separate column store. There are no indexes: each column is its own index. When a value changes, a new value is inserted; the old one is not overwritten but kept, marked with a timestamp / validity marker (one value written, one marked 'no longer current'). There are no aggregates, because aggregations can be produced dynamically 'on the fly'. And there are no complex audit tables, because audits can be reconstructed as and when needed from the previous values.
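A toy sketch of the insert-only idea with validity markers (illustrative Python, not HANA's actual record format): updates append a new version instead of overwriting, so the current state, on-the-fly aggregates, and a full audit trail all fall out of the same structure.

```python
from datetime import datetime

# Each "update" appends a row; valid_to=None marks the current version.
stock = []  # rows: (material, quantity, valid_from, valid_to)

def set_quantity(material, quantity):
    now = datetime.utcnow()
    for i, (m, q, vf, vt) in enumerate(stock):
        if m == material and vt is None:
            stock[i] = (m, q, vf, now)      # close the old version, keep it
    stock.append((material, quantity, now, None))

set_quantity("M-100", 50)
set_quantity("M-200", 20)
set_quantity("M-100", 65)                   # correction: old value kept for audit

# Current total computed on the fly; no maintained aggregate table needed.
current_total = sum(q for m, q, vf, vt in stock if vt is None)
print(current_total)                         # 85

# The audit trail is simply the closed versions.
history = [(m, q, vf, vt) for m, q, vf, vt in stock if vt is not None]
```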
  • 27. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 27Public S/4HANA example of dramatic simplification: Logistics, Materials Management. Before HANA: the SAP logistics table assembly with its auxiliary aggregates and indices (MKPF, MSEG, MARC, MARD, MCHB, MKOL, MSKA, MSKU, MSLB, MSPR, MSSA, MSSQ, MSTB, MSTE, MSTQ, plus their history tables MARCH, MARDH, MCHBH, MKOLH, MSKAH, MSKUH, MSLBH, MSPRH, MSSAH, MSSQH, MSTBH, MSTEH, MSTQH): 28 tables, not counting change-log tables. After conversion to HANA (the new SAP 'sLog' model): 1 table, the new MSEG.
  • 28. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 28Public Example of massive simplification: SAP Accounting powered by SAP HANA. From: logistics documents, CO documents and FI documents feeding separate totals & indices for Financial Accounting and Management Accounting, with pre-defined aggregates giving stability in processing but little flexibility in analytics. To: one logical document (logistics, CO, FI) with aggregation on the fly via HANA views, giving both stability in processing and flexibility in analytics. Customer benefits: harmonized internal and external reporting; significantly reduced reconciliation effort; significantly reduced memory consumption; higher flexibility in reporting and analysis; a central journal for heterogeneous system landscapes.
  • 29. © 2016 SAP SE or an SAP affiliate company. All rights reserved. [Image-only slide; caption: 1 block = 10 data objects]
  • 30. © 2016 SAP SE or an SAP affiliate company. All rights reserved. [Image-only slide; caption: 1 block = 10 data objects]
  • 31. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 31Public
  • 32. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 32Public SAP HANA, the great simplifier of enterprise software. 2010/11: SAP HANA (real-time analysis; in-memory platform). 2012: SAP Business Warehouse powered by SAP HANA (real-time reporting). 2013: SAP Business Suite powered by SAP HANA (real-time business; OLAP and OLTP together; SAP HANA Enterprise Cloud for SAP Business Suite on SAP HANA). 2014: SAP Simple Finance powered by SAP HANA (instant financial insight; no aggregates; single source of truth). 2015: SAP S/4HANA (simplified data model; new user experience; advanced processing; choice of deployment).
  • 33. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 33Public Some facts about SAP S/4HANA: 10x smaller data footprint; 4x fewer process steps; 1,800x faster analytics & reporting; 7x higher throughput. 1. Built on SAP HANA. 2. ERP, CRM, SRM, SCM, PLM in one system. 3. No locking, parallelism. 4. Actual data (25%) and historical (75%). 5. Unlimited workload capacity. 6. Predict, recommend, simulate. 7. SAP HANA Cloud Platform extensions. 8. SAP HANA multi-tenancy. 9. All data: social, text, geo, graph processing. 10. New SAP Fiori UX for any device (mobile, desktop, tablet). Three deployment options: on-premise, public cloud, managed cloud.
  • 34. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 34Public Revising our assumptions about system design. Things that are now much easier with HANA: simultaneous real-time update (Xm tx/s) and complex analysis on a single copy of the data; doing BI directly on operational system data in real time; developing against a much simpler data model, the logical data model; using sophisticated math for forecasting, prediction and simulation, with fast response; making changes 'on the fly' rather than in 'N-week mini-projects'; faster changes to simpler data models, with metadata changes rather than physical data changes; interactive examination of data, even at production volumes; fast prototyping using production-scale volumes; what were batch processes become interactive ...
  • 35. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 35Public Speed allows the elimination of aggregates and indexes; this alone can result in a 90%+ reduction in complexity. Traditional: the operational data store is copied via ETL into a data warehouse with indexes and aggregates, and queries run against those. SAP HANA: queries run through the calculation engine directly against the operational data (an additional copy is optional), and results are computed on demand.
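The difference in moving parts is easy to see in miniature. A sketch under stated assumptions ('orders' and the region grouping are made-up stand-ins): the traditional path maintains a separate aggregate that must be refreshed and reconciled, while the in-memory path just computes the answer from the base data each time.

```python
from collections import defaultdict

orders = [  # hypothetical base table: (region, amount)
    ("EMEA", 120), ("APJ", 80), ("EMEA", 40), ("AMER", 200),
]

# Traditional: a materialized aggregate, one more object to build,
# index, refresh on every load, and keep consistent with the base data.
sales_by_region = defaultdict(int)
for region, amount in orders:
    sales_by_region[region] += amount

orders.append(("APJ", 60))
# ...and now sales_by_region is stale until the next refresh job runs.

# In-memory columnar: aggregate on the fly, always consistent with the base.
def current_sales_by_region():
    totals = defaultdict(int)
    for region, amount in orders:
        totals[region] += amount
    return dict(totals)

print(current_sales_by_region())  # {'EMEA': 160, 'APJ': 140, 'AMER': 200}
```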
  • 36. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 36Public On-the-fly transformation provides greatly increased flexibility for sharing data. The view layer concept (source: In-Memory Data Management, Hasso Plattner / Alexander Zeier): a persistence layer in main memory (including the log), a view layer of stacked views built on top of it, and a presentation layer in which spreadsheets, business transactions, analytical applications and any other software all read through the same views.
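Since each view is just a stored transformation over the layer beneath it, views stacking on views can be sketched as function composition over one persisted dataset. Illustrative only: in HANA these would be calculation/SQL views, and the names here are invented.

```python
# One persisted dataset, several stacked "views"; each is a transformation
# computed on read, so there is no second physical copy to keep in sync.
transactions = [
    {"customer": "ACME", "amount": 120, "status": "open"},
    {"customer": "ACME", "amount": 300, "status": "paid"},
    {"customer": "Globex", "amount": 50, "status": "paid"},
]

def v_paid(rows):                     # view 1: filter
    return [r for r in rows if r["status"] == "paid"]

def v_by_customer(rows):              # view 2: stacked on view 1, aggregates
    totals = {}
    for r in rows:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["amount"]
    return totals

def v_top_customer(totals):           # view 3: stacked on view 2
    return max(totals, key=totals.get)

# The presentation layer reads through the stack; the base data is never duplicated.
print(v_top_customer(v_by_customer(v_paid(transactions))))  # ACME
```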
  • 37. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 37Public Project efficiency: a ~60% reduction, as tasks shrink or are eliminated. Current mode of operation (traditional RDBMS), a ~7-month project: 6 weeks define; 4 weeks develop (2 days data); 3 weeks test; 3-4 weeks rework; 2 weeks tune; 2 weeks backload; 4 weeks volume test; 2-3 weeks report; 2 weeks implement. Future mode of operation (SAP HANA, column-store in-memory DB), a ~3-month project: 4 weeks define; 4 weeks develop/test/rework (unlimited data!); 1 day tune(!); 1-3 days backload; 2 weeks report development & volume test; 1-2 weeks implement. Why it shrinks: replicate rather than ETL; avoid the physical model (4-6 layers and transformations); a single modelling tool (PowerDesigner); less development (activate replication rather than ETL; no physical layers); less testing (replication is easier to test; fewer transformations); faster, iterative test/fix/test; model-driven development; no index (re)builds; no need to change the physical data model (e.g. aggregations); no embedded calculation, only parameters to set; higher self-service/analysis means fewer reports to build; no need to renew the semantic layer; a virtual model that is easily transported and faster to reload (no intermediate physical layers, in-memory); replication or ETL can run 50x faster (e.g. a BW PoC). Example: "We took an analytic that took us 6 months to develop and we redeployed it on HANA in two weeks. Results come back so quickly now, we don't have time to get coffee." Justin Replogle, Director, IT, Honeywell.
  • 38. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 38Public Simplified stack: fewer parts, easier development. 1. Fewer servers and storage. 2. Fewer layers of software. 3. Simpler administration. 4. Lower BI run cost. 5. Faster time to deliver BI projects. 6. Productivity: tools 'at your fingertips'. 7. Reduced 'shadow IT' costs. The HANA architecture lends itself well to an initial TCO impact analysis: based on preliminary analysis with customers, we established that the overall TCO impact of the roadmap will be beneficial to operating costs (i.e. excluding the substantial business benefits from in-memory technology).
  • 39. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 39Public The SAP HANA Platform: a complete platform covering development, administration, and all styles of processing and data.
  • 40. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 40Public S/4HANA: the primary reason for HANA, and the culmination of a five-year release process. 2011: SAP HANA (real-time analysis; in-memory platform). 2012: SAP Business Warehouse powered by SAP HANA (real-time reporting). 2013: SAP Business Suite powered by SAP HANA (real-time business; OLAP and OLTP together; SAP HANA Enterprise Cloud for SAP Business Suite on SAP HANA). 2014: SAP Simple Finance powered by SAP HANA (instant financial insight; no aggregates; single source of truth). 2015: SAP S/4HANA (simplified data model; new user experience; advanced processing; choice of deployment).
  • 41. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 41Public Click on picture in screen show mode to view video https://www.youtube.com/watch?v=q7gAGBfaybQ
  • 42. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 42Public https://www.youtube.com/watch?v=q7gAGBfaybQ
  • 43. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 43Public https://www.youtube.com/watch?v=q7gAGBfaybQ
  • 44. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 44Public https://www.youtube.com/watch?v=q7gAGBfaybQ
  • 45. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 45Public How might HANA simplification solve pressing problems? Productivity: users and developers. Agility: faster response and time to market; easier change. TCO: radical simplification of the IT landscape. NEXT STEPS: review requirements and look at them with fresh eyes; revisit the 'keeps me awake at night' issues; determine how many could be solved, or assisted, by the new capabilities of HANA; identify those that are now possible and were not before; identify those that are hard or costly to meet now but could be solved more easily, more quickly or at less cost, and reconsider how to deliver them.
  • 46. © 2016 SAP SE or an SAP affiliate company. All rights reserved. Thank You. Q&A. Henry Cook, SAP Database & Technology, HANA Global Centre of Excellence, SAP (UK) Limited, Clockhouse Place, Bedfont Rd., Feltham, Middlesex, TW14 8HD, United Kingdom. Henry.Cook@sap.com, T +44 750 009 7478. https://blogs.saphana.com/2016/03/11/hana-the-why/ https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be (HANA The Why Video Jan 2016.pptx). Mark Mitchell, SAP Database & Technology, HANA Global Centre of Excellence, SAP (UK) Limited, Clockhouse Place, Bedfont Rd., Feltham, Middlesex, TW14 8HD, United Kingdom. m.mitchell@sap.com, T +44 208 917-6862. www.saphana.com www.sap.com/hana www.youtube.com/user/saphanaacademy
  • 47. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 47Public Speaker notes follow. To make this presentation self-contained, the speaker notes for the slides are included. These can be printed or opened in a separate window to accompany the presentation. They are also useful if you want to refresh your memory on a particular topic.
  • 48. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 48Public HANA 'The Why'. The purpose of this session is to remind ourselves of the reasons why HANA was invented, and why it is unique. HANA represents a fundamentally new approach to large enterprise systems, one that provides significant simplification. Because it is a new approach, it overturns many of the old assumptions we have about enterprise systems and causes changes in technology, applications, systems design and so on. Because of this it can sometimes be difficult to "see the forest for the trees". It is a disruptive technology that is having a major effect on the marketplace: witness the number of competitors now scrambling to introduce HANA-like features. It should be viewed in the same light as the introduction of the mainframe, of client/server, the PC and the Internet, and it will spark a whole generation of in-memory systems. We'll hear as we go through why HANA is special and differentiated, and why we expect to maintain a leadership position for the foreseeable future. The scene shown is in Hawaii (you can almost feel the warm breeze come through the screen). Hasso Plattner, the inventor of HANA, and Vishal Sikka, our CTO at the time, have both visited Hawaii (Hasso is a keen sailor), and in Hawaii there is a place called HANA Bay. In fact, when people talk about 'the road to HANA' you can see it here: the road just up from the shoreline is the road to HANA Bay in Hawaii, and if you do the trek to the other end of it you get an award for doing so. There are several stories about how HANA got its name: one is that it was named after HANA Bay, another that it informally stood for "Hasso's Amazing New Architecture". Each year we have been taking HANA customers who are in the 100,000 Club to HANA Bay: customers who have taken a piece of production work and used HANA to speed it up by 100,000x or more. So, 'your mileage may vary' and we would not guarantee these kinds of results, but speedups of 10x, 100x, 1,000x and more are very typical. https://www.youtube.com/watch?v=VCEr9Y8ZrVQ&feature=youtu.be
  • 49. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 49Public A logical evolution, each step being necessary to provide significant new business capability and escape the constraints of the past. Before we go back to the beginning, let's take stock of where we are with HANA. As you'll be aware, SAP applications have gone through a series of generations; the original system, called R/1, was mainframe based, as was R/2. Twenty-three years ago the applications in the Business Suite were being constrained, even when we used the largest mainframes available. Note that the 'R' always stood for 'Real Time': the idea that we could do business immediately, with no unnecessary waits. Remember that before this it was usual for applications to be purely batch, using decks of cardboard punched cards or magnetic tape. The early mainframe versions used the newly available online terminals, allowing work to be done in real time with online transactions. As larger organisations adopted SAP, and those organisations grew, they outgrew the capacity of even the largest mainframes. So, in 1992, SAP made a bold move and became one of the first large software vendors to adopt a completely new and innovative architecture: the client/server architecture. This allowed large applications to be broken into pieces and those pieces to be spread over different servers. Moreover, these servers could be less costly UNIX servers rather than expensive mainframes. Where we wanted to deploy different modules or different applications (Enterprise Resource Planning, Customer Relationship Management, Supplier Relationship Management, etc.), this could be done by placing them on different servers and networking them together. This allowed us to sidestep the constraints of a single mainframe server. There were other benefits too: the clients in the client/server relationship no longer needed to be "dumb terminals"; they could be powerful and flexible personal computers with attractive and easy-to-use graphical user interfaces. We forget now just what a revolution this was. As we said, for the past twenty-three years this worked well, and it represents the state of the art of the technology that has previously been available. However, in our view current technology has limitations, particularly in the areas of complexity, performance and flexibility. So we set out to discover whether there was a fundamentally simpler way of implementing large systems. It turns out that there is, and this radical simplification is what we now know as HANA. So, for a second time, we are bringing in a revolutionary way of doing things: the move to in-memory computing. Whilst we'll continue to enhance and maintain our existing Business Suite, we now have S/4HANA, the Business Suite that fundamentally simplifies how it works and how it can be used: a revolutionary platform that brings significant new functionality, function that cannot be provided without using the unique capabilities of HANA.
  • 50. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 50Public Accelerating business innovation through radical simplification. We've seen how HANA is now well established; we have introduced it in a cautious and well-planned manner, and it is realizing its original vision. Now let's go back and explore why we set off down this path. At SAP, our job is to bring the best combination of breakthrough technology innovations together to help you accelerate your business agenda. With our history and focus on software innovation, SAP is uniquely positioned to help you simplify your IT stack and accelerate innovation. However, 7-8 years ago SAP found itself in a bit of a bind. The ERP applications were large and complex, with 15-month release cycles. We were surrounded by competition, much of it increasingly more nimble than us. We were dependent on others for data management, some of them major competitors, and all of this complexity incurred great cost. The only way we could see out of this was to radically simplify what we did, and this was the objective of the research project that eventually produced HANA. This entailed hundreds of developers working around the clock for 6-7 years to get to where we are now. Having done this, however, we are now poised to pull ahead, and stay ahead of the market, because we have a fundamentally simpler and more effective base from which to build.
  • 51. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 51Public Traditional ERP system architecture. Let's recap the history of ERP. Originally we ran on the mainframe and ran a single application. Access was via online terminals (a major innovation at that time) and everything was done in real time. At the time we were serving mid-sized German companies. But then we sold to bigger companies, and those companies grew; as they did so we needed to deal with 'the multis': multi-company, multi-language, multi-currency, multi-application (remember that with the mainframe, terminals were tied to applications, so multiple apps meant multiple terminals). This exceeded the capacity of the mainframe, so we made a big leap, with SAP R/3, to a three-tier client/server architecture, splitting the client interface, the application logic and the database access across different servers, thus spreading the load. At the same time we allowed the newly available PC front ends to access multiple back-end applications, instead of having to have one mainframe terminal for each application we needed. In the third step we see how this expanded as new applications were added, some of which we wanted to be able to sell on their own without necessarily having the ERP core; at the same time we could mix and match servers to tune for performance and also to make operations easier. At the top of the diagram we see the appearance of separate management information systems, data marts and 'spreadmarts': the proliferation of spreadsheets that are actually being used as operational data marts in the organisation. These were used by organisations to make SAP data available for querying and reporting. Note that where these different applications were sold with ERP, as was usually the case, a subset of the ERP data was replicated within the applications, and there would be feedback processes that fed back to ERP too. In the fourth step we see the addition of a dedicated server for the Business Warehouse, to make management information available from SAP applications; but often this didn't kill off the separate management information systems, and in fact many organisations used BW as a mechanism to populate them. There might also be a separate generic data warehouse added, to combine SAP data with non-SAP data; at the time it was easier to do this by extracting the SAP data and taking it elsewhere rather than bringing the non-SAP data in (we'll see that this has changed). More servers are added, more data marts and more redundant copies of data. However, remember that by going this route we were able to scale the systems and to optimise for individual applications, and at the time there was no alternative given the technology that was available.
  • 52. Transformation through Simplification
Returning for a moment to our strategic reasons for producing HANA, this diagram shows how a strategy of simplification allows an organisation to innovate and renew itself without incurring large amounts of additional cost or diverting resources from essential development and support. Removing complexity saves effort (services, admin), reduces the amount of hardware required through landscape simplification, and can also reduce the overall amount of software needed. That makes room in the budgets to do the new, innovative things needed to take advantage of opportunities and to stay relevant.
This is precisely the strategy that SAP has been following to escape from the complexity and cost that was weighing us down, and which is now allowing us to pull ahead and stay ahead of our competition. If you think about it, this is the only way to escape the complexity and cost trap without adding massive extra cost.
Our colleague Norman Black coined the term "transformation through simplification": if you simplify, you transform almost by default. In fact it is hard not to, because you naturally start to do things in the new, simpler way, and as you find yourself using less effort and incurring less cost, those resources can be put to work doing the new things you need to do to compete.
Many organizations are in the situation we were in 7-8 years ago, and likewise many can now take advantage of what we have done with HANA, cloud, mobile etc. to move to a simpler, more productive, more agile and less costly way of working.
  • 53. HANA Simplification shows up as:
HANA brings a much simpler and more elegant approach to large information systems. We should not be surprised at this, since it was the prime objective in its development. As we go through the different aspects, keep in mind that we expect HANA to provide benefits in productivity (for users and developers), in agility, and in reduced costs. So it is useful, every now and again, to ask ourselves "why does this improve productivity?", "why would I be able to do things in a more agile way?" and "how does this reduce costs?". In most cases there are obvious reasons why this is so.
The key to understanding HANA is that it is fundamentally about simplification, and this expresses itself in several ways. It can make end users and developers more productive, giving them results quicker (orders of magnitude quicker), letting them develop useful, benefit-bearing business function faster, and enabling them to do much more work in a given period of time. For developers this is because the system and data structures they deal with are fundamentally less complex. End users can apply techniques not previously possible and get their answers interactively instead of waiting minutes or hours, so they work more "in the flow".
It makes people more agile by letting them respond faster, whether that is an instant answer to an ad hoc query or putting together a new requirement much faster than with existing technology. For example, one customer reported reproducing in two weeks with HANA an application that had taken six months to build on their current systems, and the HANA version performed much better. Developers can bring new applications to production much faster, because the design process is much simpler, developers get instant feedback, and there is less to design and develop. Likewise, where HANA can collapse multiple MIS systems into one, and combine one or more OLTP and OLAP systems together, this brings massive simplification and cost saving in the IT landscape.
The aims of HANA are to simplify, and in the process build a better and more profitable business.
  • 54. How do traditional systems impede benefits?
Before we go on it's worth reflecting for a moment on the kinds of problems we encounter in existing systems. One of the things that Hasso and his team spotted early on was that a large proportion of the data structures in an application have nothing to do with the business purpose of that application. Rather, they are the "supporting data structures" added to make the system perform: the familiar pre-aggregates, indexes, cubes, materialised views, pre-joins etc. These can account for anywhere from 60% to 95% of the data objects in the application. That is, if you looked at the physical data model used to define all the data items, you'd find that only a small percentage was basic business data, the data on customers, transactions, products etc. The remaining majority of data items are there to ensure performance. For example, we might have several layers of pre-aggregation of sales data: daily, weekly, monthly, quarterly and yearly. Likewise we probably have pre-aggregation of the different intersections of data dimensions, product by geography, again with multiple levels: product at individual item, product group, product super-group, product class, combined with geographic measures at the location, district, region and country level, all the permutations and combinations (a sketch of how quickly these multiply follows below). Somebody has to design, develop, test and maintain all of these! This is a colossal overhead, but one we don't notice because we've always had to do it; everyone has had the same problem, so we assume it is "just the way it is", the standard way we build systems.
Similarly, if we want to use multiple processing techniques, say database, predictive and text, in current systems these are typically separate systems with separate hardware and software "stacks"; to use them together you don't simply invoke them, you have to do an "application integration" job.
Another problem is simply response time. Wouldn't it be great if all operations came back in a second or two? But they don't; we are used to delays of minutes, hours or days. This destroys the "flow" of management discussions and problem solving. It also makes it difficult to use sophisticated maths to solve problems, because it takes too long. Again, we regard this as normal.
It's also hard to combine data from different sources. Even if we can reach out and read data from existing systems, this typically doesn't perform very well, so we find it hard to re-use data from existing systems and protect the investment in them, and it takes a long time to meet requirements.
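To get a feel for how those supporting structures multiply, here is a minimal sketch (Python, with hypothetical hierarchy names chosen purely for illustration, not taken from any SAP application) counting the pre-aggregate combinations implied by just three dimension hierarchies:

```python
from itertools import product

# Hypothetical dimension hierarchies, coarsest to finest level.
hierarchies = {
    "product":   ["class", "super-group", "group", "item"],
    "geography": ["country", "region", "district", "location"],
    "time":      ["year", "quarter", "month", "week", "day"],
}

# One candidate pre-aggregate per combination of levels; the base fact
# table is the finest combination, and every other rollup must be built
# and kept consistent with it on every write.
combos = list(product(*hierarchies.values()))
print(len(combos), "level combinations, e.g.", combos[0])
# -> 80 level combinations (4 * 4 * 5) from just three dimensions
```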
  • 55. The Motivating Idea
We'll continue this section with a picture from Hasso Plattner's book on in-memory computing that encapsulates the original aim of its invention. The picture shows "the board meeting of the future". All the CXOs are gathered round with all their information at their fingertips. The same applies to all the staff beneath them too: the staff in branches, on the road, middle management, supervisors, field staff and call centre operators; everybody has direct and immediate information.
Operational data is completely up to date, to the second; there is no need to wait for complex month-end runs, so we can keep pace with what customers are doing and what is happening in the supply chain second by second. Not only that, but if an analysis is required it can be performed then and there, within seconds; there is no need to gather data and meet again next month. Analysis, no matter how complex, can be performed quickly enough to feed the conversation as it happens, and that analysis is done on complete information known to be up to date. There is no disagreement between the operational and analytical systems, because they are one and the same system.
Clearly it will take some time for customers to implement the complete vision, but I think you will soon see that this is where we will get to, and why HANA and in-memory systems are uniquely able to get us there. In fact, if you look at what we are doing with HANA Live and Simple Finance you will see that we are a substantial way along the road. SAP HANA provides a fundamentally new capability. A company that can operate in this mode, with the agility, control and cost effectiveness it implies, will have a significant advantage over a competitor that cannot.
  • 56. History 2007 – The Business Goals of HANA
This slide was used by Hasso Plattner, Chairman and one of the original founders of SAP, at the launch of Business Suite on HANA in 2013, and it outlines the original design objectives for HANA. Hasso had founded his own university, the Hasso Plattner Institute, in collaboration with the University of Potsdam, and was lecturing on enterprise systems. As part of this he discussed the design of enterprise systems with his PhD students, who, being bored with learning about old-fashioned ERP systems, wanted something more modern to study and discuss. Hasso set them the task of working out what a truly modern enterprise system should look like if we could start with a clean sheet of paper, incorporating all that we now know about systems design. The title here talks about ERP, as this was the Business Suite on HANA launch, but the actual objective was to figure out what any modern enterprise system should look like.
It is noticeable that the objectives are mostly business objectives. A good way of thinking about this is that the objective was to remove "all the things that have bugged us about big systems for the last 20-30 years". For example, we split OLTP and analytics apart over 30 years ago because it was not physically possible to combine them on one system; this was done two generations ago, we've forgotten why we did it, and it has just become common practice. Likewise we'd like to get rid of batch, do BI on demand against the transactional system, not need aggregates, be able to use heavy-duty maths, and so on. We'd also like sub-second response times, because that is the way human beings are wired: they are maximally productive when they don't have to put up with wait times and delays. Of course, along the way we'd expect to make use of techniques such as parallel processing and holding all our data in memory.
  • 57. HANA Techniques: How to Build a New Kind of Enterprise System
This is a representation of many, but not all, of the main techniques pioneered by SAP HANA. Some are adaptations of previously known techniques, such as massively parallel processing and column stores; some, like the updatable column store, were totally new inventions. Where existing techniques were used, HANA would typically take them a step further and, equally important, combine them in a particularly innovative manner. This took over six years of focussed R&D. The development method itself was innovative: a distributed project using top talent from around the world. When the developers in Europe went to sleep, those in Asia would wake up and continue, and at the end of their day they'd hand over to developers in the USA and Canada. This went on for several years, with hundreds of developers keeping development going literally around the clock. Other vendors typically took ten elapsed years to do comparable work; the 24/7 continuous "shift system" compressed that dramatically.
HANA is known as an "in-memory database", but it is worth noting that of all the techniques only one of them mentions being in-memory, the "no disk" technique. This is a necessary part of the system but, as you can see, by no means the full story. The way in which the techniques are used is also different. Column-store databases had appeared before and shown their benefit and high efficiency, by transferring only the subset of columns needed for a query and by allowing very efficient compression. HANA takes this further: it uses column stores not just for these traditional reasons but to ensure that, by filling them with binary-encoded data (shown on the slide as "lightweight compression"), it can keep the Level 1, 2 and 3 on-chip caches filled and thus make full use of the SIMD and vector instructions available in modern CPUs. This is how we get 3.2bn scans per core per second, and that in turn means we can dispense with aggregates and indexes. HANA is a general-purpose database, so another unique innovation is the ability to do transactional updates against the column store.
As you can see, there is a lot more to HANA than simply putting data in memory. Modern workloads consist of more than just database work; there is also text processing, spatial, planning, OLAP, and so on. Therefore we have specialist engines for each of these that collaborate within the in-memory system, and an innovative scripting approach that lets us combine all these styles of processing while keeping the procedures together, processed in memory, in parallel.
  • 58. https://www.youtube.com/watch?v=jB8rnZ-0dKw Pepsi Ad
Think back to the slide "HANA Techniques: How to Build a New Kind of Enterprise System", which shows all the techniques HANA uses, both those adopted and those we invented. There is an analogy with an award-winning series of adverts Pepsi ran in the 1970s, which summarised all the things you associate with Pepsi in one high-energy, snappy sentence. The strapline was "Lip smackin, Thirst Quenching, Ace Tasting, Motivating, Good Buzzing, Cool Talking, High Walkin, Fast living, Ever Giving, Cool Fizzin Pepsi!" Pepsi fizzes, but they didn't just call it "Fizzing Pepsi"; that would be selling it short.
Glancing back at the previous slide, what we have is a "Massively Parallel, Hyperthreaded, Column Based, Dictionary Compressed, CPU Cache Aware, Vector Processing, General Purpose Processing, ACID compliant, Persistent, Data Temperature Sensitive, Transactional, Analytic, Relational, Predictive, Spatial, Graph, Planning, Text Processing, In Memory Database HANA!" Every one of those things contributes to the unique thing we've done. We've shortened that for convenience to "the in-memory database HANA", but we should never forget that there is a whole lot more to it than just "in memory".
  • 59. HANA is designed to be more than just a Database
First, a quick recap on what HANA is: what would turn up on your floor, or in your cloud, if you ordered one? Don't worry about the detail for now; this is just a quick note to fix in our minds what HANA physically is and isn't, and we can come back to the detail later.
HANA is an appliance, a combination of software and hardware, the hardware being available from multiple suppliers, though it can also now be supplied via cloud, virtual machines, or a "tailored data center" configuration that can make use of existing disk. It adheres to the appliance philosophy of being pre-designed, pre-built, pre-tested, and delivered ready to plug in and go. Where we differ from other vendors is that we believe it is no longer necessary to prescribe a single type of expensive, premium-priced hardware. It has standard interfaces to the outside world that make it easy to integrate with your IT estate.
HANA is not just a database. It has a very modern style of architecture in that it contains a number of collaborating "engines", each aimed at a particular style of processing. The goal is to enable all the different types of processing that applications might need, and to do them within the database, close to the data, feeding the local caches within modern CPUs so as to fully realise their enormous processing potential. This includes development life-cycle tools; text processing, with sophisticated text search and sentiment analysis; support for multiple programming languages, including sophisticated SQL scripting; and support for analytics, including business function libraries and the open-source "R" statistical suite. There is a planning engine for specialist planning functions, in particular aggregation and disaggregation. This allows pretty much any type of application logic to be pushed down into the database, and thus fully benefit from the 100,000x processing gains. We also support federation, to seamlessly pull in data from remote databases and other systems such as Hadoop. This is natural, just an extension of our multi-engine design: the query planner and optimiser simply makes use of other "engines" outside the in-memory core.
So we expect certain applications to benefit straight away, and others to benefit more as they start to exploit this way of working. Now that we have a clear idea of what HANA is, let's go back to why it was invented.
  • 60. There has been a revolution in hardware
Computer processor technology has changed out of all recognition over the past few years. Twenty years ago we might have had a system with four processors, each "ticking" away at 50 million ticks per second (50 MHz), most likely with one gigabyte of memory; each processor would be built from about a million transistors. This was roughly an Intel 486-class processor, and the four processors were four separate chips.
These days the numbers are very different. Individual processors "tick" at 3 billion ticks per second (3 GHz), and each chip has around 2.6 billion transistors and multiple processing cores. To put this in perspective, if transistors were people, the old million-transistor CPUs would be equivalent to a city such as Birmingham, UK, or Brussels, Belgium; a single modern microchip would represent one third of the population of the planet. A single server might have 8 CPUs of 15 processing cores each, totalling 120 processing cores, accessing 6 terabytes of memory, that is 6,000 gigabytes.
So whereas 20 years ago a typical business system had four separate processors, each ticking away at 50 million ticks a second, pretty soon a simple server will have 480 processing cores, each ticking at 3.4 billion ticks per second. That is over a hundred times the processing elements, each ticking over seventy times faster. Plus, as we'll discuss in a minute, there is another wrinkle: these new processors can process many data items at a time with each instruction, where the old ones handled just one at a time. This means over 100,000 times more compute power on a single, simple server. And it is very cost effective, since it is all commodity (Intel) microprocessors; cost per unit of performance has fallen by something like five orders of magnitude.
Computers now are completely different animals from what they were 20 years ago. In fact it is misleading to use the same words, "CPU", "processor" and so on, to describe such wildly different things. One of these modern servers is the equivalent of several data centres from 20 years ago. The question is, how can you tap into this amazing computing power? As we'll see, you can, but you have to work in a completely different manner from how traditional systems have worked. Remember, this trend has only really taken off in the past 5-8 years, so any software written before then will typically not use the techniques needed to fully exploit the opportunity.
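As a rough back-of-envelope check on that multiplication, using the approximate figures quoted above (this is arithmetic on the slide's numbers, not a benchmark):

```python
# Figures quoted in the text; all approximate.
old_cores, old_hz = 4, 50e6        # four 50 MHz processors, ~20 years ago
new_cores, new_hz = 480, 3.4e9     # a near-future commodity server
simd_factor = 10                   # data items per instruction, order of magnitude

speedup = (new_cores / old_cores) * (new_hz / old_hz) * simd_factor
print(f"~{speedup:,.0f}x")         # ~81,600x, i.e. on the order of 100,000x
```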
  • 61. HANA works in a fundamentally different way to other systems; this initially looks complex, but is actually easy to understand
First, don't panic! This may look very technical but it is actually easy to understand, so please go with the flow and stick with it. You will understand it, and thus understand why we are so different and able to deliver major new business capabilities; plus you'll be able to impress people at dinner parties with your knowledge of modern microelectronics.
This diagram, using information from Intel, shows the relative access times and data speeds for different parts of a typical computer system. It is a little out of date, but the principal idea remains the same. So what did we do with all these transistors? The diagram shows three Intel Xeon chips, the blue boxes. In the middle chip we look inside to see two things: multiple processing cores and the special memory caches (Level 1, Level 2, Level 3) contained on the chip. CPUs used the extra transistors to put more cores, more processing elements, on each chip, starting with two, then four, then six; we're now up to eighteen cores per chip and expect this to increase further.
A modern system is made up of interconnected parts: the CPU, the memory embedded in the CPU, the separate RAM, other processors, and disks, both hard disks and solid state. Different processors can also talk to each other. The red labels show how long it takes to get to the data, and the green labels show the transfer speed in gigabytes per second. A nanosecond is one thousand millionth of a second, or one billionth of a second. Mechanical and external devices can't keep up with the speed of modern silicon. Getting data from the fast caches (Levels 1, 2 and 3) on the chip takes just 1.5, 4 or 15 nanoseconds: very, very fast. But getting data from a hard drive takes an enormous time, ten million nanoseconds. Even solid state disks, being external devices, take 200,000 nanoseconds.
So to keep all those many processing cores fed with data, and thus use their full, amazing processing potential, we need to use the on-board cache memories and keep them full. To do this, the software running on the chip has to be aware that these caches are available, and be written in a way that fully exploits them; this is what we mean by "cache aware". Software written before these memories appeared doesn't know they are there and can't exploit them; you need to change the way the software is written, and it's hard to retrofit this way of working onto a traditional system. Plus, we can do all this very cost effectively using commodity CPU and memory chips.
  • 62. A Useful Analogy
We are not good at thinking in terms of billionths of a second, since that is so far from our day-to-day experience. So here is a good analogy thought up by one of my German colleagues. Imagine we are sitting in a house in London, enjoying a drink; in this analogy we substitute beer for data.
The beer you are consuming is the data in the CPU: immediately available and being processed. The beer in Level 1 cache is the beer on the table, within easy reach, 1 metre away. If we need more, we can go to the kitchen refrigerator, 4 metres away; this is like Level 2 cache. Then there's the refrigerator in our garage, no more than 15 metres away: Level 3 cache. Up to this point we are still just using beer in our own house (that is, data from memory on the CPU chip); we have not yet even gone to DRAM, that is, left our premises. If we need more than this, we can go down the street, no more than 40 metres away; fortunately we are next door to a liquor store, and that's our RAM. But what happens if we run out of beer (data) and have to go all the way to the bulk store, the brewery warehouse, the equivalent of the hard drive?
  • 63. A Useful Analogy
What happens if we run out of beer (data) and have to go further, to the bulk store, the brewery warehouse, the equivalent of the hard drive? In that case we have to go to Milwaukee, USA: over 6 million metres, more than 6,000 km away! (If we wanted to save some time we could use SSD and just go to the south coast of the UK; that would reduce the distance to around 130 kilometres for our next beer.)
What this shows is the huge difference between the ability of silicon to process data and the ability of mechanical devices to feed it, all of which has happened in the last 7-8 years. Software written before then cannot exploit these features because it doesn't know they're there. Where these techniques are starting to be used by others, they are typically "bolt-ons" to existing complex systems, with various restrictions imposed on them.
These are rough approximations, but they give a good sense of the relative distances (check current numbers; they are improving all the time):

                  ns              m          km
  CPU              0           0.00        0.00
  L1             1.5           1.00        0.00
  L2               4           2.67        0.00
  L3              15          10.00        0.01
  RAM             60          40.00        0.04
  SSD        200,000     133,333.33      133.33
  HDD     10,000,000   6,666,666.67    6,666.67
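The distances all follow from a single scaling choice: if a 1.5 ns Level 1 access is the beer on the table 1 metre away, everything else scales linearly. A tiny sketch of the conversion, using the latency figures quoted above:

```python
latencies_ns = {"CPU": 0, "L1": 1.5, "L2": 4, "L3": 15,
                "RAM": 60, "SSD": 200_000, "HDD": 10_000_000}

metres_per_ns = 1.0 / 1.5   # anchor: L1 (1.5 ns) corresponds to 1 metre
for tier, ns in latencies_ns.items():
    metres = ns * metres_per_ns
    print(f"{tier:>3}: {ns:>12,} ns -> {metres:>14,.2f} m ({metres / 1000:,.2f} km)")
```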
  • 64. When you store tabular data you get to store it in one of two ways
We've now established that to access the amazing speed of modern processors we have to use all those multiple cores and feed them via the cache memories held within the chips. Column-based data stores are one key technique that helps us do our work in-memory, and they have become both proven and popular in recent years.
We tend to hold data in tabular format, consisting of rows and columns; this is the format used by all relational databases, and it is the way HANA represents data too, in this very familiar and standard form. But when you store any data in memory, or on disk, you have to lay it out in some linear sequence, with data bytes strung out one after another. You can either store the data row by row (most databases do this), or you can store it column by column. We see this illustrated above, and we'll now explore the implications of each, and most importantly how each affects our ability to exploit these modern advances in computer chips. This may not be immediately obvious, but it soon will be. (A small sketch of the two layouts follows.)
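Here is a minimal sketch, in plain Python, of the two physical layouts for the same logical table (illustrative values only):

```python
# One logical table: each tuple is a row (name, city, amount).
rows = [
    ("Smith", "London", 450),
    ("Jones", "Berlin", 270),
    ("Kumar", "Mumbai", 830),
]

# Row-by-row layout: the values of one row sit next to each other.
row_store = [value for row in rows for value in row]
# ['Smith', 'London', 450, 'Jones', 'Berlin', 270, 'Kumar', 'Mumbai', 830]

# Column-by-column layout: the values of one column sit next to each other.
column_store = [list(column) for column in zip(*rows)]
# [['Smith', 'Jones', 'Kumar'], ['London', 'Berlin', 'Mumbai'], [450, 270, 830]]

print(row_store)
print(column_store)
```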
  • 65. For rows, data is moved to the processor in 'chunks' but only a tiny proportion of those 'chunks' are useful
Here we see the data laid out and physically stored in rows, in the traditional manner we've used for decades. Using this row-based format we have to skip over intervening fields to get the values we want. E.g. to sum the number fields highlighted with red boxes above, we read the first one, skip over some fields to find the second, skip over more to find the third, and so on. These rows can be hundreds of attributes long, each row hundreds or thousands of bytes; 1,000 bytes would not be unusual.
Processors typically fetch data in chunks and bring them to the processor to have computations done on them. In this diagram the alternating blue and green lines show the successive memory fetches retrieving data ready for computation. A processor typically fetches data from cache memory 64 bytes at a time, but a row may be 200, 300, 500 or more bytes long. It therefore takes many fetches to get to the next useful value, and most of the time we're skipping over "padding" between the useful values. All the while, the processor is spinning its wheels, ticking away, waiting for the next chunk of data that contains a useful value to operate on.
So, to run at full speed and get the maximum out of these fast chips, it's not enough to have many fast processors; we also need to make sure the next piece of data the processor wants is sitting waiting in cache memory, retrieved as soon as the processor is ready to process it.
  • 66. With data stored in columns "Every Byte Counts"
Now consider the column-based format. Here the number fields are held together, one after another, in one big column, as are the other columns' values. This leads to a very "dense" data format that is easy to compress and in which "every byte of data counts". It is easy to take a whole column (which may be millions or billions of items) and simply feed it into the on-chip memories in a continuous flow. The on-chip memories are continually kept full, so a processor is never kept waiting: each time it is ready to process more data there is a cache full of data close by, and we make full use of our fast, modern CPU.
Column-based storage is therefore not just a way of efficiently organizing data; it is key to feeding modern CPUs enough data to keep them busy. Every byte in a column store counts, so if we fill our local caches with it we can process it very fast, on top of whatever benefit we get from compression. The CPU cores tick away at 3-3.4 billion ticks per second, and we make full use of that incredible speed. Not only that, but processors now have special instructions that can process many data items at a time, say 10 or 20. We can do this because all the values sit tightly packed together, so we can grab several at a time. With this design, each time the processor is ready for another 10 tightly packed data items they are already waiting in the cache memory on the CPU; our very fast cores, which can process multiple values per instruction, never have to wait for data, and we get full use of them. This is where the 100,000x performance increase comes from, and it is the key to everything else we do: with it we can do away with the need for aggregates and indexes, and thus massively simplify how we build our applications.
(Note that simply being a column store does not mean another database can do what we can. Column stores were already used before, but simply because they reduce the need for disk IO when the columns sit on disk. If we query a table of 100 columns but only need two of them, we retrieve only 2% of the data; that speeds things up by 50x (100% / 2%) but gets nowhere near the performance we see from our techniques.) The fit with modern microchip architectures is the key reason we use column-based data.
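NumPy gives a loose feel for this on commodity hardware: summing a packed, contiguous column streams through the caches and uses vectorized (SIMD-backed) kernels, while the same column viewed inside a row-major table is walked with a large stride, touching only one useful value per cache line. This is only an analogy for what a column engine does internally, not HANA code:

```python
import numpy as np

n_rows, n_cols = 1_000_000, 100
table = np.random.rand(n_rows, n_cols)   # row-major: each row is contiguous

in_rows = table[:, 0]                    # the "amount" column as a strided view
                                         # (stride: 100 floats between values)
packed = np.ascontiguousarray(in_rows)   # the same column, tightly packed

# Same answer either way; the packed layout is what keeps the caches full.
assert np.isclose(in_rows.sum(), packed.sum())
# Timing the two sums (e.g. with timeit) typically shows the packed
# column summing several times faster than the strided view.
```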
  • 67. Columnar data stores offer huge benefits, if you can use them in a general purpose way
Of course, using column-store data also gives us the benefits of easier compression (because all the data in a column is of the same type) and the ability to add new columns non-disruptively, without affecting those already there. Column stores had already come into use for analytic applications because they were so efficient for that kind of work: the data compressed well, it was tightly packed, and you'd only retrieve from disk those columns mentioned by a query. So if a query looks at three columns of a hundred-column table, we only have to scan three percent of the data. This saves a huge amount of disk IO and data movement, hence the query speed-up. But it doesn't get us anywhere near the 100,000x speed-up we see through CPU-cache working. It is the ability to do our main computation in the local cache memory, and thus make full use of the potential of modern microchips, that is the important concept here.
To fully take advantage of modern CPUs in this way, we need to use the technique across a full range of workloads: OLTP, OLAP, text, predictive. It turns out the technique suits all of these, but you need to design your system from the ground up to do it. In the past, column-based storage performed poorly at updating, so SAP invented a way of doing high-speed updates against a column store. This is a unique capability, and it is what makes the use of column stores general purpose; we can also use them for text processing, spatial, graph, predictive and so on, and any mix of them. Whatever components a modern workload has, we get the full benefit of the modern CPUs across all of it. All data is held this way, and all processing makes use of it.
Other systems are starting to use these techniques, but often they are "bolted on" to the existing database, so all the traditional cost and complexity is still there: you have to nominate which data to hold in memory, it is usually used only for read-only queries, you have to do your updating somewhere else, and other styles of processing (text, spatial, predictive) can't use the techniques. So you get neither the simplicity of design and development nor the run-time speed advantages.
  • 68. All those extra transistors can be used in a number of ways to speed up computation and throughput – by 100,000x or more
To summarise, and see the whole flow within the system: we cannot make processors tick any faster, but we can put more and more transistors on them, so consider how those extra transistors can be turned into processing power.
First, we can put more than one processing core on a single CPU chip. This is now well established; we started with dual-core chips, then went to four, six, ten, and now fifteen. More processors mean more computation done, provided our software can exploit them. We want to keep those cores busy, so we can use other sets of transistors to implement very fast memory very close to the cores, actually on the CPU chip itself. This means cores seldom have to wait for their next batch of data, provided the software knows how to use it. Likewise, within each individual core we can add local cache, in fact two levels of cache, one feeding the next. That means that when we move data into the processing registers, it is highly likely to be right there, so the computer instructions never have to wait for data; it has been pre-positioned in those very fast memory caches right next to the registers.
Going down one more level, there are opportunities to turn extra transistors into processing power there too. Traditionally a processing element, using its register, processed one value at a time: an individual instruction would load a single value, do something with it, such as add it to or compare it with another single value, then store the single result somewhere. But what if we used our extra transistors to widen the register, say from 32 bits to 256 bits, and implemented special instructions that could load, add and store multiple values in a register at once? Then each instruction could process many times more data. Of course, we'd be consuming data values at a very high rate, processing them "N at a time", but we can rely on the nearby memory caches to have the data ready, so we don't spin our wheels waiting for data, again provided the software knows about these features and how to use them.
Software created 8 or more years ago will typically not exploit these features, because they did not exist at the time. This would be very difficult to retrofit to an existing row-based, disk-based database; you really do have to start with a blank sheet of paper.
  • 69. HANA Memory Data Flow
This view shows the flow of data from main memory (DRAM) into the registers where it is processed. We can see from this why it is essential to hold our data in columns: doing so gives us a stream of compact, tightly packed data values constantly flowing into the Level 3, Level 2 and Level 1 caches on the CPU chip and within the processing cores, keeping them filled so that the processors never have to wait for more data. Modern CPUs have wide registers, 128 or 256 bits, and special instructions that can process many values at once if they are packed into a register together, so a single instruction might process ten or more data items. Whenever the register needs another ten values, they are ready and waiting. This means the very fast CPUs we have available (3.4 GHz, or 3.4 billion "ticks per second") can be fully utilised, because they always have data ready to process; unlike other software, we realise their full power.
To make use of vector processing we have to compute on data held in binary format, and we do this using dictionary compression, which both compresses the data to save space and, more importantly, gives us a binary representation we can compute with, for example when executing comparisons where the order of values must be preserved. If the data were not compressed this way, we could not use the vector instructions and could not keep the Level 1, 2 and 3 caches filled ready for the CPUs (uncompressed data is larger, so it takes longer to transfer or to put in the right format, and while this is happening the CPU sits idle, twiddling its thumbs; HANA's dictionary-compressed, cache-aware processing avoids this loss of efficiency). The dictionary compression also makes the data smaller, of course, so we can fit more in DRAM and more in the CPU caches, but this is really a beneficial side effect that makes our in-memory system even more economical by needing less storage. The key reason we use dictionary-compressed data is that it lets us follow the chain of logic that enables the very fast, efficient vector processing mentioned at the beginning; a small sketch of the encoding follows below.
We have customers reporting processing speed-ups of 100,000x or more. We can "spend" some of this huge speed advantage by trading it against the need to pre-compute aggregates. If we do away with those, we do away with 60-95% or more of the complexity of the application; it becomes much simpler, smaller and more elegant, and the business benefits of productivity, agility and TCO flow directly. But to do this you have to do everything we've described.
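Here is a minimal sketch of the order-preserving dictionary encoding described above (plain Python; illustrative only, not HANA's actual format). Each distinct value gets a small integer code, and because the dictionary is sorted, range predicates can run directly on the codes:

```python
import bisect

cities = ["Berlin", "London", "Berlin", "Ankara", "London", "Berlin"]

# Sorted dictionary: code order matches value order (order-preserving).
dictionary = sorted(set(cities))          # ['Ankara', 'Berlin', 'London']
code_of = {value: i for i, value in enumerate(dictionary)}

encoded = [code_of[v] for v in cities]    # [1, 2, 1, 0, 2, 1]: small integers
                                          # that pack tightly for SIMD scans

# The predicate city < 'London' becomes an integer comparison on codes.
threshold = bisect.bisect_left(dictionary, "London")
matching_rows = [i for i, code in enumerate(encoded) if code < threshold]
print(matching_rows)                      # [0, 2, 3, 5] (Berlin/Ankara rows)
```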
  • 70. What SAP means by "in-memory" can be very different to what others may mean
From the preceding information we can see that when we talk about HANA being an "in-memory" machine, we are talking about completely different types of memory being used in completely different ways from traditional DRAM. The key point is that the cache memories in the CPU and ordinary dynamic RAM (DRAM) are completely different things. HANA is designed from the ground up to do its computation in the cache memories, and thus gain access to very high-speed computation, and it does this for all its processing. Other systems simply put data in DRAM to save IOs, or make limited use of the CPU caches for limited tasks. These are completely different techniques.
In the past, database vendors have used larger amounts of DRAM to hold data from disk, avoiding the overhead and delay of disk access. There is nothing wrong with this; it gives a good speed boost and is a well-understood technique that has been used for many years. The more data you hold in DRAM buffers, the fewer times you wait for disk, and everything speeds up. But it does not give you the 100,000x boost we see with HANA, the boost needed to considerably simplify our systems.
When we talk about "in-memory" in relation to HANA we are talking about the Level 1, 2 and 3 caches inside the CPU: taking the column-store data, ensuring the caches are constantly kept full, and feeding our dictionary-compressed binary data many values at a time to fully utilise the vector (SIMD) instructions that process many data items with each instruction. This is a fundamentally different way of computing, and to do it we needed to design HANA from the ground up. Another key point is that not only do we fully exploit these new CPU features, which have only appeared in the last 7-8 years, but we do so for ALL our processing: relational database, predictive, planning, text processing, and so on. Better still, we invented a way of storing the data that allows transactional processing on it too, so we get the speed advantage across the whole workload. We don't have to think about which part of the system to use for which part of the workload; we can use it for everything we may wish to do, a considerable design simplification and a source of productivity and agility.
So simply saying that something works "in-memory" is not sufficient. We then have to ask: "Which memory do you mean exactly, how do you use it, and what can you use it for?" The answers will be very different for different products.
  • 71. Traditional table design in disc-based RDBMS – many tables. Complex assemblies of tables complicate BI development and change
Let's recap what happens under the surface of the traditional way of doing things, where data is stored as rows, or records, with many attributes held per row. Here we see a large master record containing many attributes, surrounded by other tables; let's explore what those tables are.
The master table itself may have several hundred attributes, possibly representing attributes used across many industries, many not relevant to a particular company. The turquoise block represents a one-character indicator that we've just changed, say from "1" to "2". To do this we have to update the entire record, which may be 2,000 characters long or more. We also have to write an equivalent-sized audit record, or perhaps just a subset of the record, as we see at the top. On the right-hand side we see extra tables where pre-aggregated and summed results are kept; if we update the base record we may have to update these tables too, as various SUMs and COUNTs may have changed. These are application tables, but in the background there may also be similar tables helping the database manager produce results, and they have to be updated as well. Our turquoise blobs appear wherever updates may be necessary. On the left-hand side we see a similar structure for secondary indexes: if we have many invoices mentioning products, we may also wish to quickly find out which products are in which invoices, so we have tables, maintained by the application, which tell us this; these may also need updating, along with similar indexes maintained by the database itself.
To summarise: we need a multi-table assembly to get the required business information, plus auxiliary data sets supporting disc-based row-table update to achieve acceptable write and read performance, all of which are complex and time consuming to optimise. We also needed complex ABAP development and/or a complex data model to maintain these structures, keep them synchronised, and navigate them for reporting and querying. This is inherently complex, and requires exhaustive unit and integration testing across the whole data set and table assembly to check any "small" enhancement request from the business user community. It makes ad hoc, on-the-fly change almost impossible, makes it very difficult to deliver timely changes to the system, and makes upgrades and enhancement releases very costly to deploy. This way of working maximises the constraints on delivering innovation to the business. However, with traditional disk- and row-based technology there is no choice; this is the way it has to be done. (A toy sketch of this write fan-out follows.)
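A toy sketch of the write amplification described above (hypothetical table names; a deliberately simplified illustration, not an SAP schema): a one-character status change fans out into audit, index and aggregate maintenance:

```python
# Wide master row (abridged), plus the auxiliary structures around it.
master = {42: {"status": "1", "amount": 500, "region": "EMEA"}}
audit_log = []                       # copies of old rows, kept for auditing
index_by_status = {"1": {42}}        # secondary index maintained alongside
count_by_status = {"1": 1}           # pre-aggregated count table

def update_status(doc_id, new_status):
    row = master[doc_id]
    old = row["status"]
    audit_log.append({"doc": doc_id, **row})            # write old-row copy
    index_by_status[old].discard(doc_id)                # index upkeep
    index_by_status.setdefault(new_status, set()).add(doc_id)
    count_by_status[old] -= 1                           # aggregate upkeep
    count_by_status[new_status] = count_by_status.get(new_status, 0) + 1
    row["status"] = new_status                          # finally, the change

update_status(42, "2")   # one changed character, four structures touched
print(audit_log, index_by_status, count_by_status)
```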
  • 72. S/4HANA design concept makes use of the much simpler and elegant columnar data method
The S/4HANA design concept is based on inserting only the single changed value into the in-memory columnar store, for the field where the change occurred. It works like this. Each attribute has its own column of data; it no longer physically shares a row with other attributes (but we know which attributes constitute a row: they are the Nth value in each column for the Nth row). When an attribute is modified we don't overwrite the old value; instead we insert a new value and mark the old one as superseded. We have written one piece of information and modified another, and that's it; we have avoided writing whole rows. Because we still have the old value, we can reconstruct the old rows for audit purposes at any time, in fact for any point in time, so we don't need all the complex audit data structures of whole or partial extra rows. This is all done live and in memory; of course we write the change to a permanent log in case we need to recover, but that is fast and simple.
(Note, for the technically minded: SAP HANA adds a validity vector to the tuple to identify the most recent record from a timeline perspective; the validity bit is read during query execution to speed up the overall read dramatically. The bit vector is a technique to speed up reading the differential buffer, where the uncompressed delta changes are kept, so that the combined read across the differential buffer and the compressed main store is much faster than reading a row record on disc, or even in memory, as the system can perform a table scan. For each record, the validity vector stores a single bit that indicates whether the record at this position is valid. The vector remains uncompressed to ensure proper read and write speed. A toy version of this pattern is sketched below.)
We don't need indexes, as every column effectively works as its own index: to pick out particular values we scan the dictionary for that column, pick out the bit-compressed values we need, then do an ultra-fast scan of the column; no separate data structures are needed. Likewise we no longer need to pre-aggregate data; we have sufficient speed to roll up any data we wish on the fly. This is pretty abstract, so let's look at the practical consequences in our Logistics application.
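A toy version of the insert-only pattern with a validity vector, tracking the history of a single field (plain Python; a simplification of the mechanism in the note above, not HANA internals):

```python
values = []        # append-only column of versions of one field
valid = []         # one validity bit per version: is it the current one?
written_at = []    # logical timestamp per version, enabling time travel

def write(value, t):
    if True in valid:                     # supersede the current version
        valid[valid.index(True)] = False
    values.append(value)
    valid.append(True)
    written_at.append(t)

def as_of(t):                             # reconstruct the value at time t
    visible = [v for v, ts in zip(values, written_at) if ts <= t]
    return visible[-1] if visible else None

write("1", t=10)
write("2", t=20)                          # the "update": insert + bit flip
print(values, valid)                      # ['1', '2'] [False, True]
print(as_of(15), as_of(25))               # 1 2
```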
  • 73. S/4HANA Example of dramatic simplification: Logistics - Materials Management
Here is the Logistics application before and after conversion to the simplified version using HANA. At the top of the slide we see the original data structures needed. These include all the kinds of things we spoke of earlier: not just the main table (MSEG) but all the other tables needed to support it, all the layers of aggregation, secondary indexes, etc. This came to a total of 28 tables to be created, maintained, tuned, administered and changed over time.
At the bottom we see the new application data structures: basically a single table, with aggregation done on the fly, as sketched below. Think of the implications for how easy it is to introduce new innovations, and how simple it is to create new reports that use the data without having to worry about the impact on the 27 other tables that are no longer there. Think about how much less error-prone this is, how quickly changes can be made, and how much less testing and administration is needed, and therefore how much easier it will be for us to introduce new innovations on top of the ones already enabled by HANA. This is what allows us to pull ahead of the market and stay there. We will simply be better able to innovate than companies reliant on previous-generation technology, and our customers will benefit by being able to differentiate themselves in ways that can't be provided by other software applications.
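On-the-fly aggregation is what makes the single-table design workable. Here is a minimal sketch of replacing a maintained totals table with a group-by at query time (hypothetical field names, not the real MSEG schema):

```python
from collections import defaultdict

# Hypothetical material movements held only in the single base table.
movements = [
    {"material": "M-01", "plant": "P1", "qty": 10},
    {"material": "M-01", "plant": "P2", "qty": 5},
    {"material": "M-02", "plant": "P1", "qty": 7},
]

# No pre-aggregated totals table to maintain on every write:
# the rollup is computed at query time over the base data.
totals = defaultdict(int)
for movement in movements:
    totals[movement["material"]] += movement["qty"]

print(dict(totals))   # {'M-01': 15, 'M-02': 7}
```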
  • 74. Example of massive simplification: SAP Accounting powered by SAP HANA
This exploitation of modern microchips, and the incredible speed it makes possible, allows us to do away with pre-aggregated data, indexes and other "supporting data structures". A good illustration is what we have done with our Simple Finance application. Please note: whether you are planning to use, or even considering, our Finance application is not the point here; we are using it simply as an example of how a major application can be simplified. It was the first of the SAP ERP applications to be re-engineered to use the unique capabilities of HANA. It is one of our most important innovations, introduced in 2014, and is essentially an entirely new architecture for SAP ERP Financials. Its main characteristics are:
• Convergence of FI and CO. There is one logical document that is the common basis for both regulatory and managerial accounting. This creates a much higher degree of consistency between FI and CO, abolishing the need for time-consuming and error-prone manual reconciliation. Since they are now the same record, they can't differ.
• Abolishment of pre-defined aggregates. All aggregation and transaction processing is now performed on the fly, based on HANA Live views. The totals and indices in FI and CO are a thing of the past. This further preserves data consistency throughout the entire technology stack, and as a side effect, thanks to HANA, memory consumption is drastically reduced, helping to reduce TCO.
• More flexible reporting. As a beneficial side effect, reports can now be configured by the end user in a very flexible way, without requiring any assistance from IT; we can report on any attribute. Large customers in particular are interested in leveraging Simple Finance on SAP HANA for harmonised real-time internal reporting (the so-called central journal / finance approach), prior to consolidating their full system landscape into a single ERP system.
  • 75. Finance System – Before HANA
Here is a good, concrete illustration of what we mean by complexity, and how we provide radical simplification. Under the Financial application there are about 23,000 "data objects": tables, aggregates, indexes and so on. Every little blue rectangle on this diagram represents ten objects, so the diagram is itself considerably simplified!
Now consider a business operation such as a corporate restructuring, an acquisition, or a demerger. How will it affect the values that have been pre-calculated and deposited in all these physical data objects? The only way to be sure is for someone to go through them and think about them, design any changes, develop the change mechanism, then unload, reload and recalculate the data. Clearly a very onerous task, and therefore a significant impediment to organisational change. Remember, when we are evaluating, say, three scenarios, we need to work out which subset of these physical data objects is needed to represent each scenario, replicate them three times, and load them with data structured according to each option. When we've done that, is scenario three better than scenario one because it is actually better, or does it just look better because something changed in the data during the five weeks it took to load everything? And while doing this we also need to worry about which objects can co-exist with other objects on our disk storage, and which can and should be separated to make loading or querying more efficient.
So what is the implication of the simplification we have done?
  • 76. Finance System – After HANA
In the simpler Finance application we have removed 95% of all the data objects! That is, we have removed 95% of all the things we have to worry about and maintain in the application, while the function remains exactly the same. In fact we can now do things we could not do before, because we have been freed from being slowed down by the complexity. It is intuitively obvious that the diagram above will be significantly easier to work with: we will be able to deliver function much faster, at less cost and with much greater flexibility. If we want to change company structures and roll them up another way, we just go ahead and do it; there is no need to unload and reload the data. If we want to represent the company in several different ways, we can do that too, because there is no cost in defining the different structures when we don't have to store the data. This will help us change our organisations "on the fly" and move toward continuous inter-company reconciliation and continuous close, and many more functions besides.
It is worth flicking back and forth between the last page and this one, and just thinking about the implications for development, system change and administration. SAP invented HANA to make it possible to introduce new applications, and change existing ones, much faster and more easily. From the above you can see how we succeeded, and you can do the same to make your organisation more productive and agile, and to lower costs.
  • 77. SAP HANA More than just a database – a platform
Of course we need to build these capabilities into a complete working system, and we have done this with what we call the SAP HANA Platform. At its core is the in-memory computing engine we've discussed, complemented by large-scale disk-based column stores, stream processors, data replication, extract-transform-load capabilities and easy integration with open-source systems such as Hadoop.
This modern architecture contains a number of complementary engines. The core engines run in memory, to provide the simplicity and agility that in-memory gives, extended by many other engines outside the in-memory core: engines for multi-tier storage, Hadoop as an engine for large volumes of unstructured data, engines that happen to sit in legacy systems, and specialist engines for streaming or for communication with mobile devices. This architecture is simple, elegant and modern. It allows for any mix of processing, provides very cost-effective IT, and still gives you the productivity and agility advantages of in-memory. These are generic advantages, usable for SAP applications, non-SAP applications, or mixtures of the two. This lets us address a wide range of information requirements and match the right kind of tool to the right job, and it makes the SAP HANA Platform very easy to integrate into existing IT estates, complementing what is already there and meeting requirements in a simpler and more cost-effective manner.
We'll not dwell on this now, as SAP has many comprehensive presentations on the SAP HANA Platform. The point here is that, having invented a fundamentally simpler and more effective way of building enterprise systems, we have built it out into a complete platform.
  • 78. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 78Public SAP HANA, the great simplifier of enterprise software

The vision of SAP HANA has been introduced in cautious, well-considered steps over a number of years, introducing new function gradually and fully exercising the core platform.

In 2011 (actually 2010) we released HANA simply as a stand-alone analytical platform.

From 2012 customers could drive real-time operational reporting and simplify how they run SAP BW. SAP BW powered by SAP HANA has already exceeded 1,600 customers.

From 2013 our customers could start working as a real-time business and simplify how they run their SAP Business Suite, by bringing transactions and analytics together into a single in-memory platform. SAP Business Suite powered by SAP HANA has exceeded 1,800 customers in only two years (one of the fastest-growing products in SAP's history). The Business Suite, specifically enabled by HANA, provides our customers with significant benefits through massive simplification, much greater flexibility and much greater performance – all at less cost. More than 5,800 customers have already adopted the platform to do just that, with our existing applications optimized to run on SAP HANA.

In June 2014 we enabled people to drive real-time insight across finance and simplify how they run their finance system with SAP Simple Finance powered by SAP HANA, simplifying the application by 95% – removing 22,000 data objects, aggregates, data redundancies and replication.

With S/4HANA we are building on the success of the SAP Business Suite powered by SAP HANA with a completely new suite. S/4HANA is built only on SAP HANA, because only HANA can deliver the level of simplicity required by customers today. Now, in 2015, S/4HANA is natively built on SAP HANA for massive simplification (a simplified data model: no indices, no aggregates, no redundancies), and uses SAP Fiori to offer an integrated user experience with modern usability (the interfaces are role-based, take at most 3 steps to get the job done, are developed 'mobile-first', and offer a consistent experience across the various Lines of Business). S/4HANA is natively built for advanced innovations (e.g. new applications for predicting, recommending and simulating, and for processing text, geo data, graphs and genomes). It is also natively engineered to provide choice of deployment (on-premise, cloud, hybrid). In addition, S/4HANA fully leverages the new multi-tenancy functionality enabled by SAP HANA.
  • 79. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 79Public Some facts about SAP S/4HANA

Now that we've established how we've been able to do something very different – to fundamentally simplify enterprise systems – let's look at how this expresses itself in our new generation of applications, and let a few current facts about S/4HANA speak for themselves. It is clear from these that we have succeeded in the vision we set out to realise.

S/4HANA is a new code base, with Fiori a new user interface (UX), plus new 'guided configurations' that help with adoption. These simplified applications are seamlessly integrated to offer one solution for a wide range of business problems. All these applications get a completely modern, web-based Fiori user experience – ready for real cloud consumption. All these capabilities taken together make this a completely new product: with a new database, data management, technology and front end.

A major achievement is the ability to reintegrate ERP, CRM, SRM, SCM and PLM in one system – saving hardware costs, operational costs and time. This is possible because S/4HANA has a 10x smaller data footprint compared to a best-in-class business suite on a traditional database. And remember, we now implement these on platforms that are large, scalable clusters of processors: we have a unified system, but one that is easily and linearly scalable. Thus in some ways we have come full circle, back to the integration of multiple modules and applications on a single platform, but now considerably simplified and with much greater performance.

Another example is fewer process steps: processing the receivables app in SAP GUI vs. Fiori/Simple Finance, the number of screen changes drops from 8 to 2 (4x).
  • 80. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 80Public Revising our Assumptions about System Design Things that are now much easier with HANA

Before we go on it's worth doing a quick checkpoint. We've explained how HANA was specifically designed to solve major problems with existing systems. Some of what we've said about HANA is counter-intuitive, because it goes against twenty or more years of experience with computers. There are certain assumptions we make about systems design because those assumptions are based on long experience. Here we must not just learn about HANA but un-learn many of those assumptions, because they are no longer true.

If we have a requirement that needs high-speed transactional update against a data store that we simultaneously want to run complex analysis against, then we can. In the past we would have had to separate these two pieces of the requirement into different parts of the system – an OLTP updating system and a complex query system – and then implement logic to connect them, keep them in sync and so on. That is no longer true: HANA allows us to combine both on a true single copy of the data. Thus we can run BI queries against the operational data, whether in a custom application we've written or in an SAP application such as the Business Suite; a minimal sketch of this appears below.

We can develop using a simple data model that is pretty much the logical business data model, rather than having to embellish it with lots of added aggregations and indexes. This makes for much faster development and much easier modification. We can implement more requirements and keep the system up to date with changing requirements – and more requirements met equals more benefit.

Large-scale math can be used, often 'interactively'. Previously we shied away from this because response times would be too long, but now we can insert heavy-duty math into what look like interactive OLTP transactions and get superior results from more sophisticated analysis. Because of the ease of change we can commit to keeping the application up to date more easily, supporting a faster pace of change – for example, modifications to analytic rule-bases or constantly changing reports.

We can check our data, even at production volumes, in seconds. This allows a much faster pace of development, and we often don't need separate project phases, one for functional development on a subset of the data and others for full-volume testing. Likewise we can most likely do away with most batch processes and instead streamline our processes so that they become interactive, enhancing the productivity and responsiveness of those who work with the system, both developers and users.

So this is just an aide-memoire of things to reconsider – things that used to be complex and scary but which are now relatively straightforward. We'll explore some of these in more detail now.
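As a minimal sketch of that 'single copy' point – with table and column names invented for illustration, not taken from any SAP application – the same HANA column table can take a transactional insert and, in the very next statement, serve an analytic aggregate, with no replica, secondary index or ETL hop in between:

    -- One column-store table serves both workloads (hypothetical names)
    CREATE COLUMN TABLE orders (
      order_id   INTEGER PRIMARY KEY,
      region     NVARCHAR(20),
      amount     DECIMAL(15,2),
      created_at TIMESTAMP
    );

    -- An OLTP-style write...
    INSERT INTO orders VALUES (1001, 'EMEA', 250.00, CURRENT_TIMESTAMP);

    -- ...and immediately a BI-style query over the very same data
    SELECT region, SUM(amount) AS revenue
    FROM   orders
    GROUP  BY region;

There is no synchronisation logic to write because there is nothing to synchronise: the insert and the aggregate see the same single copy of the rows.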
  • 81. © 2016 SAP SE or an SAP affiliate company. All rights reserved. 81Public Speed allows elimination of aggregates, indexes – this alone can result in a 90%+ reduction in complexity

Thus, importantly, one of the ways in which we choose to spend this speed increase is not simply in making things faster. Rather, we take that speed and use it to eliminate complexity. When we can do our processing this fast, we no longer need indexes and aggregates and all the complexity that comes with them. In fact, for most types of processing we can not only eliminate all this extra complexity and baggage from our systems but still end up with a system that is much faster than our old ones! This is the goal of HANA: not simply making things faster, but radically simplifying everything we do. This represents a fundamentally new way of doing things, a step change in information systems, just as the mainframe, client/server, the PC and the Internet were.

Remember what we are aiming to do: improve productivity and agility, and radically reduce cost of ownership. When we look at the above we can see lots of reasons why this follows. Users get instant or near-instant results, even for complex processing. Developers can deliver new or changed function sooner, so users accrue more benefit; they can do this because the whole system is much simpler to develop in – there is less to develop, and what is left is simpler to develop with. If we need to respond to new opportunities and threats we can do so much more simply too, for the same reason: simpler systems are more agile to work with.

Obviously we have radically simplified the IT landscape, so we save on hardware, data centre and administration costs. Aside from being on commodity Intel systems, we expect to need smaller systems and fewer of them. Note that they are only smaller in number and size; their compute power may be many times that of the original systems.
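To see why a secondary index becomes unnecessary, consider a sketch like the following, again with invented names (it reuses the hypothetical orders table from the previous sketch). In a classic row store, the filter below would usually justify creating and maintaining an index on region; in an in-memory column store, the region column is dictionary-compressed and scanned in parallel, so the query is fast with no index object to design, build or keep consistent:

    -- No CREATE INDEX accompanies this table: the columnar scan does the work
    SELECT order_id, amount
    FROM   orders
    WHERE  region = 'EMEA';

Every index or aggregate we no longer create is one less object to load, back up, refresh and reason about – which is the kind of saving behind the 90%+ reduction in objects quoted above.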