Scale-Up or Scale-Out?
Which is better?
What to do after Go-Live?
How to improve/fine-tune performance?
Can you share some results of such an optimisation exercise?
2. BW on HANA: Scale-Up Versus Scale-Out
Comparison of the Scale-Out and Scale-Up configurations for BW on HANA, criterion by criterion:
• Capacity ramp-up, flexibility, scalability. Scale-Out: can be ramped up easily to almost 200 TB; this configuration allows maximum flexibility and scalability. Scale-Up: limited scalability.
• Capacity ramp-down. Scale-Out: available. Scale-Up: no provision.
• SAP certification for ramp-up from 1 TB to >4 TB and beyond. Scale-Out: already available. Scale-Up: not certain; at the mercy of SAP.
• Business downtime during capacity ramp-up. Scale-Out: zero. Scale-Up: equal to migration time (roughly 18 hours or more).
• Costs associated with capacity ramp-up. Scale-Out: no project fees. Scale-Up: has to be done as a migration project.
• Installed base. Scale-Out: >95% of customers. Scale-Up: <5% of customers.
• Largest production instance size. Scale-Out: >60 TB. Scale-Up: <4 TB.
• Architecture, operations and support. Scale-Out: multi-node system(s) with more parts; calls for a little extra management effort compared to Scale-Up, but allows better flexibility. Scale-Up: single node; calls for less management and support effort, but in case the single node fails, the only fallback is DR.
• Table re-distribution. Scale-Out: may call for table re-distribution once or twice a year; this can be done during planned maintenance. Scale-Up: little to no need, as everything is stored on a single node.
• Reliability. Scale-Out: higher reliability; a spare node can be provisioned in case one node goes down. Scale-Up: low reliability; if the single node goes down, the entire system fails.
3. BW on HANA: Performance Optimisation
I can share in brief some items that certainly result in performance optimization:
• DataStore Object (DSO) activation is a critical step in the process of transferring data from source systems to the business warehouse. We have achieved activation times that are 54 times faster than the previous process.
• Faster DSO activation makes data available more quickly and supports more frequent loading and updating of data, for availability closer to real time.
• It speeds up the flow of data from source systems.
• Conversion of in-memory objects, whereby the extended star schema is slimmed down and dimensions are realigned, provides scope for re-architecting some data flows without disruption.
• Suggest a Data Volume Optimization strategy to keep the system clean and green.
• In a tool-supported post-migration step, InfoCubes can be selected and converted to SAP HANA-optimized objects.
4. Housekeeping is the single most significant contributor: SAP HANA Data Volume Management Tasks (Sample Template)
Priority / Action / Deadline / Strategy
• High: Define retention times for PSA records individually for all DataSources and delete outdated data. (After BW on HANA go-live; strategy: NA)
• High: Schedule a periodic batch job for deletion of outdated entries from table ODQDATA_F. (After BW on HANA go-live)
• High: Enable the non-active data concept as suggested in SAP Note 1767880. (After BW on HANA go-live)
• High: Archive or move to NLS old/unused data from the DSOs and InfoCubes. (After BW on HANA go-live)
• High: End-to-end review with the Philips team and recommendations regarding the top BW schema tables. (After BW on HANA go-live)
• High: Frequent review and check of HANA DB parameter settings. (After BW on HANA go-live)
• High: Check whether power save mode is active; for more information, see SAP Note 1890444. (After BW on HANA go-live)
• High: Weekly check of HANA DB trace settings. (After BW on HANA go-live)
• High: Review and propose a best practice for the backup procedure. (Ongoing)
• High: At least weekly review and monitoring of the recommendations for the alerts generated in the HANA DB system. (After BW on HANA go-live)
• High: Use report RSDRI_RECONVERT_DATASTORE to convert HANA-optimized DSOs back to classic objects, since from BW 7.3 SP5 standard DSOs are supported on the HANA schema algorithm. (After BW on HANA go-live)
• Medium: Consider partitioning for tables that are expected to grow rapidly, in order to ensure parallelization and adequate performance. (After BW on HANA go-live)
• Medium: Propose and suggest re-partitioning of tables that are expected to grow; HP recommends re-partitioning tables before inserting mass data or while they are still small. (After BW on HANA go-live)
• Medium: Review, test and implement SAP Basis and memory management parameter recommendations to avoid OOM (Out of Memory) issues. (After BW on HANA go-live)
5. Optimization 1: Recommendations to reduce the data footprint on the HANA database. As a rule of thumb, at least 45-50% of the SAP HANA memory should be reserved for SQL computations, SAP HANA services, analytics and other OS-related services; the rest can be occupied by the actual data in the different column and row stores. Frequently check, for the BW on HANA system, how much memory is occupied by data and how much is left for computations. If data takes more than its share, performance is adversely affected: the number of unloads of tables from memory to disk increases, which further deteriorates performance, and this leads to high memory peaks in SAP HANA. By keeping up this monitoring we can always keep the BW on HANA system in line with best practices.
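The 45-50% rule of thumb above can be sketched as a simple check. This is a minimal, illustrative sketch: the function name, the threshold and the figures are assumptions, and in a live system the numbers would come from HANA's memory monitoring views rather than hard-coded values.

```python
# Illustrative check of the 45-50% rule of thumb: flag the system when
# in-memory data squeezes out the share reserved for computations,
# services and the OS. All figures are hypothetical.

def check_memory_split(total_memory_gb, data_in_memory_gb, max_data_share=0.55):
    """Return (share of memory occupied by data, True if healthy)."""
    data_share = data_in_memory_gb / total_memory_gb
    return data_share, data_share <= max_data_share

share, healthy = check_memory_split(total_memory_gb=1024, data_in_memory_gb=650)
print(f"data occupies {share:.0%}; healthy: {healthy}")
```

With 650 GB of data in a 1024 GB allocation, data takes about 63% of memory, so the check flags the system as unhealthy: too little headroom is left for computations, and table unloads to disk become more likely.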
Optimization 2: The provider proposes frequent analysis of the HANA database configuration, including reviews of HANA DB parameters, CPU frequency settings and trace settings. Objects with a high number of records should also be analyzed, and table partitioning should be considered if these tables are expected to grow rapidly in the future.
Optimization 3: To reduce the data footprint in the HANA database, review and implement the following recommendations:
• Keep track of the size of the top (say, 30) PSA tables and assess the data retention policies.
• Cold/warm data can be unloaded to disk.
• Define retention times for PSA records individually for all DataSources and delete outdated data, starting with the largest PSA tables.
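A per-DataSource retention policy like the one recommended above boils down to computing a deletion cutoff date. The sketch below is hypothetical: the DataSource names and retention values are illustrative examples, not the customer's actual policy.

```python
# Hypothetical PSA retention policy: given a per-DataSource retention
# time in days, compute the cutoff date before which PSA requests are
# candidates for deletion. Values are illustrative only.

from datetime import date, timedelta

RETENTION_DAYS = {
    "2LIS_02_ITM": 30,   # e.g. purchasing items: keep 30 days
    "0FI_GL_4":    90,   # e.g. G/L line items: keep 90 days
}

def deletion_cutoff(datasource, today=None):
    """Return the date before which PSA data for this DataSource is outdated."""
    today = today or date.today()
    return today - timedelta(days=RETENTION_DAYS[datasource])

print(deletion_cutoff("2LIS_02_ITM", today=date(2024, 3, 31)))  # 2024-03-01
```

Starting with the largest PSA tables, each DataSource's cutoff then drives the actual PSA deletion in BW.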
BW on HANA project: additional questions
6. Optimization 4: Delete the outdated entries from table ODQDATA_F by scheduling the periodic batch job ODQ_CLEANUP, as suggested in SAP Note 1836773.
Recommendation: Table ODQDATA_F is part of the operational delta queue. Refer to SAP Note 1836773 (How to delete outdated entries from delta queues - SAP Data Services) and delete the outdated entries from this table using the batch job called ODQ_CLEANUP.
Once a day a cleanup process removes all outdated entries from the delta queues so they do not fill up. This is a regular batch job and can be maintained as such. With the ODQMON transaction the job and the retention interval can be configured:
• In transaction ODQMON, choose menu Goto -> Reorganize delta queues.
• Schedule a job for reorganization, e.g. ODQ_CLEANUP_CLIENT_004.
• By default the job is scheduled each day at 01:23:45 system time.
• If needed, adapt the start time and frequency in transaction SM37.
• If needed, adapt the retention time for recovery (see the F1 help for details).
Optimization 5: Enable the non-active data concept for BW on SAP HANA DB; review and implement the code corrections contained in SAP Note 1767880 (Non-active data concept for BW on SAP HANA DB).
After implementing the code corrections, follow the manual steps to ensure that the unload priorities of all tables are set correctly.
This ensures that Persistent Staging Area (PSA) tables, change log tables, and write-optimized DataStore objects are flagged as EARLY UNLOAD by default, which means that these objects are displaced from memory before other BW objects (such as InfoCubes and standard DataStore objects).
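The unload-priority idea can be illustrated with a toy ordering. This is only a conceptual sketch: the priority labels mirror the text above, but the ranking scale is an assumption and does not reproduce HANA's actual unload-priority values.

```python
# Toy model of the unload-priority concept: under memory pressure,
# tables flagged EARLY UNLOAD (PSA, change logs, write-optimized DSOs)
# are displaced before InfoCubes and standard DSOs.

tables = [
    ("PSA table",           "EARLY UNLOAD"),
    ("Change log",          "EARLY UNLOAD"),
    ("Write-optimized DSO", "EARLY UNLOAD"),
    ("InfoCube",            "NORMAL"),
    ("Standard DSO",        "NORMAL"),
]

# Lower rank unloads first (the numeric scale is illustrative only)
rank = {"EARLY UNLOAD": 0, "NORMAL": 1}
unload_order = [name for name, flag in sorted(tables, key=lambda t: rank[t[1]])]
print(unload_order)
```

The point of the flag is exactly this ordering: staging objects give up memory first, so query-relevant objects such as InfoCubes stay resident longer.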
7. Optimization 6: Understand and review the CPU type, CPU clock frequency, and the hosts. If the CPU clock frequency is
set too low, this has a negative impact on the overall performance of the SAP HANA system. Usually the CPU clock
frequency should be above 2000 MHz.
Optimization 7: If an inappropriate trace level is set for SAP HANA database components, a high amount of trace
information may be generated during routine operation. This can impair system performance and lead to unnecessary
consumption of disk space.
Recommendation: For production usage of your SAP HANA database, we recommend setting the trace level of all components according to SAP's recommended trace levels.
Background: Traces can be switched in the 'Trace Configuration' tab of the SAP HANA studio Administration Console.
Optimization 8: Largest non-partitioned column tables. There are objects with a high number of records (more than 300 million). This is not yet critical with regard to the technical limit of SAP HANA (2 billion records), but table partitioning should be considered if these tables are expected to grow rapidly in the future.
Recommendation: Consider partitioning for tables that are expected to grow rapidly, in order to ensure parallelization and adequate performance. We recommend that you partition tables before inserting mass data or while they are still small.
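A rough partition-count estimate against the 2-billion-record limit can be sketched as follows. The sizing inputs (growth rate, horizon, target fill) are assumptions chosen for illustration; only the 2-billion figure comes from the text above.

```python
# Rough sketch: estimate how many partitions a fast-growing table needs
# so that each partition stays comfortably below SAP HANA's technical
# limit of 2 billion records after a projected growth horizon.

HANA_RECORD_LIMIT = 2_000_000_000  # technical limit cited above

def suggested_partitions(current_records, annual_growth_rate, years=3,
                         target_fill=0.5):
    """Size partitions to ~target_fill of the limit after `years` of growth."""
    projected = int(current_records * (1 + annual_growth_rate) ** years)
    budget = int(HANA_RECORD_LIMIT * target_fill)
    return max(1, -(-projected // budget))  # ceiling division

# A 300-million-record table growing 80% per year for 3 years:
print(suggested_partitions(300_000_000, annual_growth_rate=0.8))  # 2
```

This also makes the "partition while the table is still small" advice concrete: deciding the layout up front avoids re-partitioning (and moving mass data) later.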
8. Optimization 9: Largest partitioned column tables (records). Consider re-partitioning tables that are expected to grow. We also need to re-partition tables before inserting mass data or while they are still small.
For more information, see SAP Note 1650394 or refer to the SAP HANA Administration Guide.
Optimization 10: Largest column tables in terms of delta size. The separation into main and delta storage allows high compression and high write performance at the same time. Write operations are performed on the delta store, and changes are transferred from the delta store to the main store asynchronously during the delta merge. The column store automatically performs a delta merge according to several technical limits that are defined by parameters. If applications require more direct control over the merge process, the smart merge function can be used for certain tables (for example, BW prevents delta merges during data loading for performance reasons).
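The main/delta mechanics described above can be sketched with a toy class. This is a loose conceptual model only: real HANA auto-merge is driven by several parameters, not a single row threshold, and the class and attribute names are invented for illustration.

```python
# Toy model of the column-store main/delta split: writes land in a small
# delta store and are merged into the read-optimized main store once a
# (hypothetical) threshold is crossed, unless a smart-merge-style block
# is in place (as BW does during data loads).

class ColumnTable:
    def __init__(self, auto_merge_threshold=1000):
        self.main, self.delta = [], []
        self.auto_merge_threshold = auto_merge_threshold
        self.merge_blocked = False          # e.g. set during a BW data load

    def insert(self, row):
        self.delta.append(row)              # writes always go to the delta store
        if len(self.delta) >= self.auto_merge_threshold and not self.merge_blocked:
            self.merge()

    def merge(self):
        self.main.extend(self.delta)        # move changes to the main store
        self.delta.clear()

t = ColumnTable(auto_merge_threshold=3)
t.merge_blocked = True                      # load in progress: no merges
for r in range(5):
    t.insert(r)
print(len(t.main), len(t.delta))            # delta grows while blocked
t.merge_blocked = False
t.merge()                                   # explicit merge after the load
print(len(t.main), len(t.delta))
```

Blocking merges during the load keeps inserts cheap; the explicit merge afterwards restores the compressed, read-optimized main store.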
Optimization 11: Memory utilization details for HANA services.
The following table shows the memory usage of the SAP HANA engines (services); it is only a snapshot taken at the time of the download collection.
Different aspects of the memory consumption of the HANA database are highlighted: "Physical Memory Used by Services"
corresponds to the "Database Resident Size" in the SAP HANA studio and can be compared with the resident size of the
service in the operating system. The sum of "Heap Memory Used Size" and "Shared Allocated Size" roughly corresponds to
the memory usage of the SAP HANA database, which is shown in the SAP HANA studio as "Database Memory Usage".
The difference between the "Database Memory Usage" and the "Resident Database Memory" can usually be explained by
the "Allocated Heap Size".
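The relations between these metrics can be written as back-of-the-envelope arithmetic. The figures below are hypothetical placeholders (the original snapshot table is not reproduced here); only the relationships come from the text above.

```python
# Back-of-the-envelope relations between the HANA memory metrics
# described above, using hypothetical figures in GB.

heap_used        = 300.0   # "Heap Memory Used Size"
shared_allocated =  50.0   # "Shared Allocated Size"
resident         = 420.0   # "Physical Memory Used by Services" (DB resident size)

# "Database Memory Usage" roughly corresponds to heap used + shared allocated:
database_memory_usage = heap_used + shared_allocated
print(database_memory_usage)   # 350.0

# The gap between resident size and database memory usage is usually
# explained by heap that is allocated but not currently used:
allocated_not_used = resident - database_memory_usage
print(allocated_not_used)      # 70.0
```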
9. Optimization 12: Reducing table sizes. All tables located in the row store are loaded into main memory when the database is started. Furthermore, row store tables cannot be compressed as much as tables located in the column store. Therefore, we need to keep the row store as small as possible.
RSDDSTAT* data: BW statistical data saved in the RSDDSTAT* tables is located in the row store. Since new data is continuously loaded into the Business Warehouse (BW), the amount of statistical data is always increasing. Therefore, it is essential to keep the statistical tables, which also provide information about the performance of your queries, as small as possible.
Recommendation: Reduce the number of records saved in the RSDDSTAT* tables. Consider the following:
• When you maintain the settings for the query statistics, deactivating the statistics is the same as activating the statistics internally with detail.
• The settings on the "InfoProvider" tab page affect the collection of statistical data for queries, as do the settings on the "Query" tab page (transaction RSDDSTAT). For Web templates, workbooks, and InfoProviders, you can only decide between activating and deactivating the statistics. If you did not maintain settings for the individual objects, the default setting for the object is used. If you did not change the default settings, the statistics are activated.
• You can delete statistics data using report RSDDSTAT_DATA_DELETE or via the corresponding graphical interface accessible through transaction RSDDSTAT.
10. Optimization 13: Conversion of InfoCubes and DataStore Objects. After an upgrade to SAP NetWeaver BW 7.30 SP5 or later with SAP HANA, all DataStore objects and InfoCubes remain unchanged. In a tool-supported post-processing step (transaction RSMIGRHANADB or report RSDRI_CONVERT_CUBE_TO_INMEMORY), DataStore objects and InfoCubes can be selected and converted to SAP HANA-optimized objects.
All InfoCubes should be converted to fully benefit from the advantages provided by SAP BW powered by HANA DB. On the other hand, we do not recommend converting DataStore objects, as the advantages of the converted objects can be achieved without modifying the DSOs.
Optimization 14: SAP HANA-optimized DataStore Objects
Background: All advantages of HANA-optimized DataStore objects are now available for standard DSOs too, which renders conversion unnecessary. While HANA-optimized DSOs will still be supported in the future, we do not recommend converting DSOs but, rather, reconverting any existing HANA-optimized DSOs back to classic objects. Starting with BW 7.30 SP10 (BW 7.31 SP09, BW 7.40 SP04), converting classic DSOs to HANA-optimized DSOs is no longer possible.
SAP HANA-optimized DataStore objects cannot be included in an SAP BW 3.x data flow. If you want to optimize a DataStore object that is part of a 3.x data flow, you first have to migrate the actual data flow.
Furthermore, an SAP HANA-optimized DataStore object cannot be populated directly with real-time data acquisition (RDA).
The 'unique records' property does not provide any performance gain. In addition, the uniqueness check is not performed in BW at all; instead, the uniqueness is checked by an SQL statement (DBMS exit).
Never Generate SIDs: with this option, SIDs are never generated. It is useful for all DSOs that are used (only) for further processing in other DSOs or InfoCubes, as it is not possible to run a query directly on this kind of DSO.
11. Optimization 15: SAP HANA-optimized InfoCubes. With SAP HANA-optimized InfoCubes, the star schema is transferred to a flat structure, which means that the dimension tables are no longer physically available. Since no dimension IDs have to be created for SAP HANA-optimized InfoCubes, the loading process is accelerated. The accelerated insertion of data into SAP HANA-optimized InfoCubes means that the data is available for reporting earlier.
Optimization 16: Analytic indexes. Analytic indexes can be created in the APD (transaction RSANWB), or they can be an SAP HANA model published in the SAP BW system. If you want to use SAP BW OLAP functions to report on SAP HANA analytic or calculation views, you can publish these SAP HANA models to the SAP BW system (transaction RSDD_HM_PUBLISH).
Optimization 17: MultiProvider queries
For MultiProvider queries based on SAP HANA, the standard setting for the "operations in BWA" query property (transaction RSRT) is "3 Standard". However, if the MultiProvider consists of a mixed landscape (there are SAP HANA-optimized as well as non-converted InfoProviders underneath), performance problems might occur.
Recommendation: If you are running queries on top of a MultiProvider containing SAP HANA-optimized InfoProviders as well as standard InfoProviders, either convert all InfoProviders to SAP HANA-optimized objects, or always set the property to standard mode 3.
Last but not least:
Always remember to test, and to make a full system backup, before implementing any changes in a productive environment.
12. Performance testing – Query
[Chart: impact of current run time using Scale-Out BW on HANA. Number of queries per run-time bucket (less than 10 s, 10 s to 30 s, 30 s to 60 s, more than 60 s) before and after the BW on HANA move, plus average query time (Avg Before vs. Avg After) per query type.]
13. Data-Load results: 'Customer' 12 TB BW on HANA PoC

Application      | Load type (impacting / long-running) | Number of loads | Improvement
A2A              | Long-running load | 1 | 78%
CL SCM           | Impacting load    | 1 | 95%
CORE 1           | Long-running load | 1 | 15%
Master data      | Long-running load | 1 | 87%
One PI           | Long-running load | 1 | 91%
PDS              | Impacting load    | 1 | 98%
POS              | Impacting load    | 1 | 89%
QXP              | Impacting load    | 2 | 91%
SCM Dashboards   | Impacting load    | 3 | 81%
SMART - SRM      | Impacting load    | 3 | 57%
SMART - VBM      | Impacting load    | 1 | 88%
SMART - VBM      | Long-running load | 2 | 66%
VIPP LI          | Impacting load    | 3 | 64%
VIPP PH          | Impacting load    | 2 | 45%
VIPP PH          | Long-running load | 4 | 71%
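The PoC figures above can be aggregated into a single headline number. Note one assumption: the two rows without an application name are attributed to the application listed immediately above them (SMART - VBM and VIPP PH respectively), which is how the original layout reads.

```python
# Aggregate the PoC load-improvement table: average improvement
# weighted by the number of loads in each row.

rows = [  # (application, load type, number of loads, improvement %)
    ("A2A",            "long",      1, 78),
    ("CL SCM",         "impacting", 1, 95),
    ("CORE 1",         "long",      1, 15),
    ("Master data",    "long",      1, 87),
    ("One PI",         "long",      1, 91),
    ("PDS",            "impacting", 1, 98),
    ("POS",            "impacting", 1, 89),
    ("QXP",            "impacting", 2, 91),
    ("SCM Dashboards", "impacting", 3, 81),
    ("SMART - SRM",    "impacting", 3, 57),
    ("SMART - VBM",    "impacting", 1, 88),
    ("SMART - VBM",    "long",      2, 66),  # attribution assumed
    ("VIPP LI",        "impacting", 3, 64),
    ("VIPP PH",        "impacting", 2, 45),
    ("VIPP PH",        "long",      4, 71),  # attribution assumed
]

total_loads = sum(n for _, _, n, _ in rows)
weighted_avg = sum(n * imp for _, _, n, imp in rows) / total_loads
print(total_loads, round(weighted_avg, 1))  # 27 loads, ~71.7% average
```

Across all 27 measured loads, the load-weighted average improvement works out to roughly 72%, which is a fairer summary than a simple per-row average since some applications contribute several loads.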