www.mongodb.com
Breakfast with MongoDB
July 6th, 2017
How to Build a Mainframe Offloading Solution
Agenda
• From the Mainframe to the Operational Data Store
• Live Demo by Quantyca
• Mainframe modernization stories
• The future: PSD2 and Blockchain
• Cerved: A Journey into the API Economy
• Closing and Q&A
MongoDB in Numbers
800+ employees
3,000+ customers
22 offices worldwide
$311M in funding
30M downloads
#1 NoSQL database
MongoDB customers, offices, support, and user groups worldwide
30+ Million Downloads
Let our team help you on your journey to efficiently leverage the capabilities of MongoDB, the data platform that
allows innovators to unleash the power of software and data for giant ideas.
The largest financial services, communications, and government organizations are working with MongoDB to modernize their mainframes, reducing cost and increasing resilience.
Being successful with MongoDB for Mainframes
5-10x Developer Productivity
We help our customers to increase overall
output, e.g. in terms of engineering
productivity.
80% Mainframe Cost Reduction
We help our customers to dramatically lower
their total cost of ownership for data storage
and analytics by up to 80%.
Challenges of Mainframes in a Modern World
There are three areas of data management concern: cost, risk, and adaptability. In the legacy world these have been disconnected, with many technologies attempting to achieve an integrated landscape.
Cost · Risk · Adaptability
Unpredictable Loads
Planned/Unplanned Downtime
Expensive Ecosystem
Change Management
Access to Skills
Capacity Management
Business Process Risk
Operational Complexity
Customer Experience
Analytics Offload
[Diagram: producers (files, orders, MDM, logistics, mainframe, …) feed a staging area via (E)TL tools (e.g. Ab Initio, Talend, Informatica) into the DWH, which serves consumers/channels (web, mobile, B2C, CMS, …) through APIs.]
Typical DWH Structure
Limits
• High implementation effort
• Transformation and standardization into a harmonized data model is at the heart of the DWH.
• Rigidity
• The data model is pre-defined and rigid, and it is difficult to expand it to integrate additional external sources.
• No raw data.
• Data volume
• A DWH can manage high volumes of data, but only with dedicated databases and optimized hardware.
Rise of Data Lakes
• Many companies started to look at a Data Lake architecture
• A platform to manage data in a flexible way
• Aggregates and correlates data across silos in a single repository
• Enables exploration of all the data
• The most common platform is Hadoop
• Allows horizontal scalability on commodity hardware
• Allows storing heterogeneous data with a read-optimized model
• Includes working layers in SQL and common languages
• Strong references (Yahoo and Google)
Data Lake Structure
How Does It Work?
• Source data is loaded as-is into a raw data layer, without any transformation
• The technology is not based on an RDBMS but on a file system (Hadoop's HDFS, for example)
• Queries are executed directly on the raw data and are much more complex to write, since they must also contain the logic for harmonizing and consolidating the data (see the sketch below)
• Adding a new source of information is quite easy
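A minimal sketch of what this query-time harmonization looks like in practice, assuming PySpark over HDFS; the paths, field names, and normalization rules are hypothetical:

```python
# Hypothetical raw-layer query: two sources were landed as-is, each with its
# own conventions, so the harmonization logic must live inside the query.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-lake-query").getOrCreate()

crm = spark.read.json("hdfs:///raw/crm/customers/")          # name, tax_code
billing = spark.read.csv("hdfs:///raw/billing/customers/",   # NAME, VAT_NUMBER
                         header=True)

# Normalize keys and column names on the fly, then join across silos.
harmonized = (
    crm.select(F.upper(F.trim("tax_code")).alias("key"),
               F.col("name").alias("crm_name"))
       .join(billing.select(F.upper(F.trim("VAT_NUMBER")).alias("key"),
                            F.col("NAME").alias("billing_name")),
             on="key", how="outer")
)
harmonized.show()
```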
Typical High-Level Architecture
[Diagram: the same producers (files, orders, MDM, logistics, mainframe, …) now feed, via (E)TL tools (e.g. Ab Initio, Talend, Informatica), a Data Lake that serves consumers/channels (web, mobile, B2C, CMS, …) through APIs.]
And for non-analytical queries?
• Data lakes are also expected to serve Hadoop output to online applications. These applications can have requirements like the following (see the sketch after this list):
• Response times in milliseconds
• Random access on small, indexed subsets of data
• Expressive queries
• Frequent real-time updates
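A minimal sketch of that online access pattern with MongoDB, using pymongo; the collection and field names are hypothetical:

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
accounts = client.bank.accounts

# An index supports millisecond random access to a small subset of data.
accounts.create_index([("customerId", ASCENDING)])

# Expressive query on the indexed subset: one customer's active accounts.
for doc in accounts.find({"customerId": "C42", "status": "ACTIVE"},
                         {"_id": 0, "iban": 1, "balance": 1}):
    print(doc)

# Frequent real-time update.
accounts.update_one({"iban": "IT001"}, {"$inc": {"balance": -25.0}})
```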
Reference Architecture / 3
[Diagram: producers (files, orders, MDM, logistics, legacy DBs, …) feed, via (E)TL tools (e.g. Ab Initio, Talend, Informatica), a Data Lake complemented by a column DB and a search engine, which together serve consumers/channels (web, mobile, B2C, CMS, …) through APIs.]
Issues
• Data model
• Designed for analytics, not for applications
• Not real-time
• ETL processes take a long time to traverse all the components
• Search engines are limited in query capabilities
• Column DBs are limited in data model flexibility and high availability
Legacy Optimization with MongoDB
• Extending legacy applications to new channels & users
• Mobile & web, IoT sensor data integration, single view of data
• Different data access, performance and scalability requirements
• Conserve valuable MIPS, avoid resource contention
• Move from a product-centric to a customer-centric schema
Business Drivers
Solution
• Operational Data Store built on MongoDB
Banks' Situation: The Past
• Banks used mainframe systems to store the critical data that was at the core of their business: account data, transaction data, other core financial data, etc.
• Internal front-office systems were used by bank employees ("bankers") to access this critical data and deliver it to clients.
• Critical bank practices (data governance, backup, HA/DR, auditing, etc.) evolved with mainframe systems as a core element of those practices and solutions. Mainframes were the golden source of data before we invented the term "golden source".
The Past
[Diagram: clients interact with bankers through front-office applications (teller systems, advisor tools) that issue data requests against the mainframe infrastructure: get account info, get investment info, risk, manage client profile. ATMs get/update account balances via direct access.]
Present
[Diagram: mobile and web apps load their home screens through logic layers (service calls, orchestration, aggregation, data transformation) that call account services, client data services, and channel-filtered access logs (filter = mobile, filter = web); teller/POS apps additionally reach bank-confidential data through a specialized integration point. Behind these services sit the mainframe, reference data, and other data sources. Example functionality: "Get account summary for Ryan", "Get client summary for Ryan", "Get last 10 mobile touchpoints for Ryan", "Get last 7 days' web activity for Ryan". These operations are typically read on demand, with caching et al.]
Use Case: Monthly UK Payday
Situation
Almost everyone gets paid on the same day here in London (unlike in North America): once a month, at the end of the calendar month.
Almost everyone in London uses their bank's mobile app to check their balance on that same day, once a month.
People used to have to go to the bank to get their balance, but now everyone has an app on their phone they can use to check it.
The fact that it is so easy to check a bank balance, combined with the significant time between paydays, means that most people check their balance on payday at least once, if not multiple times.
The mainframe infrastructure needed to support the traffic spike from this use case costs banks millions of dollars a year.
An abstract, logical view
If we strip away some details to investigate a more fundamental,
abstract, logical view of the current state, it looks like this.
[Diagram: multiple clients (functions, things, apps, other services and applications) each embed their own app + data logic and pull the data they need directly from mainframe services and infrastructure, mainframe data, other data sources, and reference data.]
Some simple changes
By moving the app/data logic out of the clients and replacing it with data logic between the legacy DB and the MongoDB ODS, we can deliver significant value, reducing complexity and redundancy.
[Diagram: apps, things, functions, analysis, and other services now simply use data from a MongoDB ODS and its services. The data logic between the legacy DBs/data and the ODS handles schema, rules, transformation, orchestration, audit, and plumbing; writes flow back to the legacy side, and reads reach the ODS in batch, near real time, or real time, alongside other data sources and reference data.]
Why MongoDB
• Optimized data model
• The data is transformed during replication into a customer-centric data model instead of a product-centric one; this is called a "Single View" (illustrated below)
• Flexibility
• The flexible document-based model removes the need to update the schema to store new information
• Architecture for modern applications
• MongoDB has a scalable, geographically distributed architecture for always-on services
• Integration with analytical systems
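For illustration, a hypothetical "Single View" document: data that was scattered across product-centric tables is folded into one customer-centric document, and new attributes can be added without a schema change:

```python
# Hypothetical customer-centric document (all field names illustrative).
customer = {
    "_id": "C42",
    "name": "Ryan",
    "accounts": [
        {"iban": "IT001", "type": "checking", "balance": 1250.40},
        {"iban": "IT002", "type": "savings", "balance": 9800.00},
    ],
    "cards": [{"panLast4": "1234", "status": "ACTIVE"}],
    "lastTouchpoints": [{"channel": "mobile", "ts": "2017-07-06T08:15:00Z"}],
}
```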
Benefits
• Reduced development time
• The customer-centric data model and MongoDB's flexible document data representation reduce development time by more than 50%
• Improved SLAs
• MongoDB's scalable, always-on architecture improves uptime and performance, providing a better customer experience
• Reduced costs
• Move MIPS from the legacy system to low-cost commodity hardware
Reference Architecture
[Diagram: producers (files, mainframe, orders, MDM, logistics, additional data sources & systems, …) feed the Operational Data Store via (E)TL tools (e.g. Ab Initio, Talend, Informatica, custom), a CDC (Change Data Capture) process (Attunity), or, optionally, message queues (e.g. Kafka, ActiveMQ, RabbitMQ); consumers/channels (web, mobile, B2C, CMS, …) access it through an API layer (authorisation, authentication, logging, etc.).]
Attunity Replicate
Universal · Real-Time · Simpler
Universal Platform for Data Replication
#1 Independent Provider of Streaming CDC
Architecture
[Diagram: sources (Hadoop, files, RDBMS, data warehouse, mainframe) flow through filter, transform, and transfer stages, as batch loads or incremental CDC, over in-memory or file-channel transport backed by a persistent store, to targets (Hadoop, files, RDBMS, data warehouse, mainframe).]
Attunity Replicate
Rapidly Move Data Across Complex Hybrid Environments
[Diagram: on-premises and cloud-platform sources (RDBMS, data warehouse, Hadoop) replicate to targets (RDBMS, data warehouse, Hadoop) over WAN-optimized data transfer: compression, multi-pathing, encryption.]
Attunity Replicate
Go Agile with Automated Processes
• Target schema creation
• Heterogeneous data type mapping
• Batch to CDC transition
• DDL change propagation
• Filtering
• Transformations
[Diagram: sources (Hadoop, files, RDBMS, mainframe, EDW) to targets (Hadoop, files, RDBMS, Kafka, EDW).]
Attunity Replicate
Zero Footprint Architecture
• CDC identifies source updates by scanning change logs
• No software agents required on sources or targets
• Minimizes administrative overhead
Low Infrastructure Impact
• Log-based CDC
• Source-specific optimization
• Uses native DB clients for security
Attunity Replicate
Streaming CDC to Apache Kafka
[Diagram: transaction logs feed CDC, and bulk loads feed batch data; in-memory optimized metadata management and data transport stream ordered messages (1, 2, …, n) in JSON/Avro data format to a message topic, and JSON/Avro schema format to a schema topic, in a real-time data flow.]
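To make the flow concrete, here is a hypothetical change event in JSON form (shown as a Python dict). The exact envelope Attunity Replicate emits may differ; this only illustrates the split between the data topic and the schema topic:

```python
# Hypothetical CDC change event on the data topic (field names illustrative).
change_event = {
    "operation": "UPDATE",                 # INSERT / UPDATE / DELETE
    "table": "PUBLIC.ACCOUNT",
    "transactionId": "0000A1B2",
    "timestamp": "2017-07-06T08:15:00Z",
    "key": {"ACCOUNT_ID": 42},             # identifies the changed row
    "before": {"ACCOUNT_ID": 42, "BALANCE": 100.0},
    "after": {"ACCOUNT_ID": 42, "BALANCE": 75.0},
}
# The matching table structure would travel separately on the schema topic.
```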
Attunity Replicate
User Interface
• Intuitive web-based GUI
• Drag and drop, wizard-assisted configuration steps
• Consistent process for all sources and targets
Guided User Experience
Attunity Enterprise Manager (AEM)
Manage Data Ingest and Replication at Scale
• Centralize design and control
  § Loading, mapping, DDL changes
  § Stop, start, resume, reload
  § Automated status discovery
• Manage security controls
  § Granular access controls
  § Full audit trail
• Customize your views
  § Group, search, filter, sort, and drill down on tasks
  § Respond to real-time alerts
• Leverage graphical dashboards and APIs (REST and .NET)
Demo Scenario
Demo! The scenario
• 3 tables:
• Customer
• Account
• Operations (transactions)
• Operations change the balance of an account within the same transaction
• The same transactional integrity has to be replicated to the MongoDB data store (see the sketch below)
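One way to preserve that integrity on the MongoDB side (multi-document transactions were not yet available in MongoDB at the time) is to model the account and its recent operations in one document, so that the balance change and the operation land in a single atomic update. A sketch with pymongo, using hypothetical field names:

```python
from pymongo import MongoClient

accounts = MongoClient("mongodb://localhost:27017").demo.accounts

def apply_operation(account_id, amount, op_id):
    # Single-document updates are atomic in MongoDB: the balance change and
    # the embedded operation are applied together or not at all.
    accounts.update_one(
        {"_id": account_id},
        {"$inc": {"balance": amount},
         "$push": {"operations": {"opId": op_id, "amount": amount}}},
    )

apply_operation("A1", -50.0, "OP-0001")
```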
Reference Architecture with a twist
[Diagram: the same reference architecture as before: producers (files, mainframe, orders, MDM, logistics, additional data sources & systems) feed the optional Operational Data Store via (E)TL tools, the CDC (Change Data Capture) process (Attunity), or message queues (e.g. Kafka, ActiveMQ, RabbitMQ); consumers/channels access it through an API layer (authorisation, authentication, logging, etc.).]
What is Kafka?
• Kafka is a distributed publish-subscribe messaging system organized in topics.
• It is designed to be
• Fast
• Scalable
• Durable
• When used in the right way and for the right use case, Kafka has unique attributes that make it a highly attractive option for data integration.
First approach
[Diagram: the CDC process writes Customer (C), Account (A), and Operation (O) changes into a single topic in transaction order: C A O A O A O …]
• All DBMS transactions are in the same topic and their order is guaranteed
• The changes are applied sequentially to MongoDB (see the sketch below)
• No other tools are needed; all processing and persistence is managed inside Kafka and MongoDB
• Higher parallelism can be achieved by partitioning the Kafka topic by customer ID
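A sketch of the first approach, assuming the kafka-python client, a single ordered topic (here named cdc.all, a hypothetical name), and the change-event shape sketched earlier:

```python
import json
from kafka import KafkaConsumer
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").demo
consumer = KafkaConsumer("cdc.all",
                         bootstrap_servers="localhost:9092",
                         value_deserializer=json.loads)

for msg in consumer:                 # topic order == transaction order
    evt = msg.value
    # Route to the customer / account / operations collection.
    coll = db[evt["table"].split(".")[-1].lower()]
    if evt["operation"] == "DELETE":
        coll.delete_one(evt["key"])
    else:
        coll.replace_one(evt["key"], evt["after"], upsert=True)
```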
Second approach – Distributed Transactions
[Diagram: the CDC process writes account changes (A A A …) to the Public.Account topic and operation changes (O O O …) to the Public.Transaction topic; the two streams are joined on the change message key into Private.Account and Private.Transaction, emitting combined AO records (AO AO AO …), as sketched below.]
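A deliberately minimal, in-memory sketch of the keyed join; a production version would need a partitioned, fault-tolerant join (e.g. Kafka Streams). Topic names are taken from the slide, the event shape is the hypothetical one assumed earlier:

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer("Public.Account", "Public.Transaction",
                         bootstrap_servers="localhost:9092",
                         value_deserializer=json.loads)

pending = {}  # change-message key -> the half of the pair seen first

for msg in consumer:
    key = msg.value["transactionId"]
    if key in pending:
        joined = {**pending.pop(key), **msg.value}   # the combined AO record
        print("apply to MongoDB:", joined)
    else:
        pending[key] = msg.value
```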
Live Demo
Case Studies
5 Phases of Legacy Offloading
MongoDB can help you offload MIPS from legacy systems, save double-digit percentages in cost, and increase agility and capabilities for new use cases at the same time. The phases, ordered by scope and business benefit:
• Offloading Reads – Operational Data Layer (ODL): records are copied via a CDC/delta-load mechanism from the legacy DB into MongoDB, which serves as the Operational Data Layer, e.g. for frequent reads.
• Offloading Reads – Enriched ODL: the ODL data is enriched with additional sources to serve as an operational intelligence platform for insights and analytics.
• Offloading Reads & Writes – "Y-Loading": writes are performed concurrently to the legacy system as well as MongoDB, e.g. via a service-driven architecture.
• Transforming the role of the mainframe – "MongoDB first": transactions are written first to MongoDB, which passes the data on to the legacy system of record.
• Transforming the role of the mainframe – System of Record: MongoDB serves as the system of record for a multitude of applications, with deferred writes to the legacy system if necessary.
Offloading Reads
Initial use cases primarily focus on offloading costly reads, e.g. querying large numbers of transactions for analytics or historical views across customer data.
Operational Data Layer (ODL)
Using a change data capture (CDC) or delta-load mechanism, you create an operational data layer alongside the mainframe that serves read-heavy operations.
Enriched Operational Data Layer (ODL)
Additional data sources (e.g. files) are loaded into the ODL to create an even richer picture of your existing data and enable additional use cases like advanced analytics.
[Diagram: the mainframe keeps 100% of writes; with an ODL, reads split roughly 50-90% vs. 10-50% between the ODL and the mainframe; with an enriched ODL, roughly 25-75% either way.]
Offloading Reads & Writes
By introducing a smarter architecture to orchestrate writes concurrently, e.g. via a microservices architecture, you can shift away from delayed CDC or delta-load mechanisms.
Y-Loading
Writing (some) data concurrently into the mainframe as well as MongoDB enables you to further limit interactions with the mainframe technology. It also sets you up for a more transformational shift of the mainframe's role within your enterprise architecture. (A sketch follows below.)
[Diagram: the application writes through a microservices/API layer to both MongoDB and the mainframe, alongside additional data sources and files; load splits roughly 75-90% vs. 10-25% for writes and 40-80% vs. 20-60% for reads between MongoDB and the mainframe.]
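A hedged sketch of Y-loading inside such a service layer: each write is issued concurrently to the legacy path and to MongoDB. The write_to_mainframe stand-in and all names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor
from pymongo import MongoClient

ods = MongoClient("mongodb://localhost:27017").bank.accounts

def write_to_mainframe(doc):
    pass  # stand-in for the existing legacy write path (e.g. an MQ call)

def y_load(doc):
    # Fan the write out to both systems concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        legacy = pool.submit(write_to_mainframe, doc)
        mongo = pool.submit(ods.replace_one, {"_id": doc["_id"]}, doc, True)
        legacy.result()   # surface a failure on the legacy path
        mongo.result()    # surface a failure on the MongoDB path

y_load({"_id": "A1", "balance": 1200.0})
```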
Transforming the role of the legacy
With a shift towards writing to MongoDB first, before writing to the mainframe (if at all), you further change the meaning of "system of record" and "mainframe" within the organisation.
"MongoDB first"
Transactions are first written to MongoDB, which can serve as a buffer before it passes transactions on to the mainframe as the system of record.
[Diagram: the application writes through a microservices/API layer to MongoDB, which passes writes on to the mainframe for processing; load splits roughly 50-80% vs. 20-50% for writes and 60-90% vs. 10-40% for reads between MongoDB and the mainframe.]
System of Record
MongoDB serves as the main system of record, with writes optionally passed on to the mainframe for legacy applications only, or the mainframe is decommissioned entirely.
[Diagram: load splits roughly 50-90% vs. 10-50% for writes and 90-100% vs. 0-10% for reads between MongoDB and the mainframe.]
Mainframe Offloading enables insight through a Single View of the Customer
A Spanish bank replaces Teradata and MicroStrategy to increase business insight and avoid significant cost
Problem
Branches required an application that offered all the information about a given customer and all of their contracts (accounts, loans, cards, etc.).
Multi-minute latency for accessing customer data stored in Teradata and MicroStrategy.
In addition, accessing data frequently from the legacy systems would cause spikes in MIPS and related cost.
Solution
Offloaded to MongoDB, where data is highly available and can be accessed by new applications and channels.
Built a single view of the customer on top of MongoDB: a flexible and scalable app, easy to adapt to new business needs.
Super-fast ad hoc query capabilities (milliseconds) and real-time analytics thanks to MongoDB's Aggregation Framework.
Can now leverage distributed infrastructure and commodity hardware for lower total cost of ownership and greater availability.
Results
Cost avoidance of $10M+.
Application developed and deployed in less than 6 months. New business policies easily deployed and executed, bringing new revenue to the company.
Current capacity allows branches to load all customer info instantly, in milliseconds, providing a great customer experience.
New applications and services can be built on the same data platform without driving up MIPS cost or increasing risk by putting more stress on legacy systems.
Problem
High licensing costs from proprietary database and data grid technologies.
Data duplication across systems, with complex reconciliation controls.
High operational complexity impacting service availability and speed of application delivery.
Solution
Implemented a multi-tenant PaaS with a shared data service based on MongoDB, accessed via a common API with message routing via Kafka.
Standardized data structures for storage and communication based on the JSON format.
Multi-sharded, cross-data-center deployment for scalability and availability.
Results
Millions of dollars in savings after migration from Coherence, Oracle Database, and Microsoft SQL Server.
Develop new apps in days vs. months.
100% uptime with a simplified platform architecture, higher utilization, and a reduced data center footprint.
Database-as-a-Service
Migration from Oracle & Microsoft to create a consolidated "data fabric" saves millions in cost, speeds application development, and simplifies operations.
During their recent FY 2016 Investor Report, RBS CEO Ross McEwan highlighted their MongoDB Data Fabric platform as a key enabler in helping the bank significantly reduce cost and dramatically increase the speed at which RBS can deploy new capabilities.
"Data Fabric will help reduce cost significantly and dramatically increase the speed at which we can deploy new capabilities for our customers"
– Ross McEwan, CEO, RBS
RBS's Investor Report FY'16
Future Challenges
Future Challenges
• Open Banking and Open Data
• Provide APIs for third-party applications
• Unpredictable inbound traffic
• Requires an elastic infrastructure
• Provisioning times are currently very high
• Blockchain
• Will have a huge impact on how transactions are managed
Future Architecture
[Diagram: producers (files, mainframe, orders, MDM, logistics, additional data sources & systems) feed an optional on-premise Operational Data Store; Attunity CDC streams changes into Kafka, where consumers (read, transform, apply) populate a cloud data store. An internal API serves consumers/channels (web, mobile, B2C, CMS, …) and an open API serves third-party payment providers (web, mobile); an optional write-back mechanism returns data to the source.]
Case Studies
Stefano Gatti – Head of Innovation and Data Sources
July 6th, 2017
Cerved API
The journey from project to platform
Index
• Cerved and its "data"
• "Liquid" data and the "solid" algorithm
• Welcome to the API Economy, but ...
• Cerved API Ecosystem
• The next challenges
Cerved and its data
Business Areas & Numbers
CREDIT INFORMATION – protecting against credit risk
MARKETING SOLUTIONS – growing through new business opportunities
CREDIT MANAGEMENT – managing and recovering non-performing loans
• Documents: 1,000 reports/min
• Lines of software code: 40 million
• Clients: 34,000
• Payment data records: 60 million
• People: 2,000
• Revenue: €377 million (2016)
Our most important V: "Variety"
• Web data
• Open data
• Proprietary data
• Official non-chamber-of-commerce data
• Official chamber-of-commerce data
[Chart axes: Accuracy vs. Complexity.]
The delivery infrastructure
Sourcing · Business Rules · Products · Delivery · Operations · Platform
• 1 PB of data
• 3,000 business rules
• 600 million monitoring data events per year
• 350 operators on internal software
• 50 delivery websites
• >200 B2B projects
• >500 products
• 80% of fulfilments are time-critical
• 1,350 servers
• 18 of the 30 most widespread databases in production
The API context and its numbers
ONE DAY AT CERVED:
• SOAP services/reports: over 1,000
• Microservices: 3,000
• Registry searches: 110,000
• Rating calculations: 300,000
• Service calls: 10,000,000 (intra-farm: 5,000,000; legacy: 2,500,000; SOAP: 1,500,000; REST: 1,000,000)
• Data events: 4,500,000
• Document storage operations: 6,500,000
• Information blocks delivered: 5,000,000
"Liquid" data and the "solid" algorithm
The Big Data Economy
Liquid data: it pervades growing companies ...
The arrival of the Algorithmic Economy?
Algorithms give solidity and usability to the "big data" world
Big data & algorithms
To make better decisions ...
"Co-founder of the MIT Media Lab, pioneer of human-machine interaction and among the most important data scientists in the world"
Sandy Pentland
Source: http://www.betterdecisions.it
API Economy trend
A tool to make the pervasiveness of data and algorithms more agile
[Chart: technology trend and business trend, 2012 to 2016.]
Welcome to the API Economy, but ...
A data- & algorithmic-driven API Economy approach
Overview
The Cerved API offering serves different types of "consumer":
• BUSINESS (e.g. Sales & Marketing, Finance & Operations, Strategy and Development, ...)
• IT (e.g. banks, financial institutions, large enterprises, web agencies, economic news bloggers, system integrators, management software developers, SMEs, ...)
Across both business lines and IT, APIs act as "building blocks" in a store model.
API Economy and the Cerved vision
[Diagram: the Cerved platform exposes data and algorithms through an API Portal + API Gateway (raw data). Via an integration layer, clients, partners, and third parties combine them with external data and custom algorithms in their own connectors and applications; start-ups and developers contribute innovative components through open innovation and hackathons.]
APIs not only as technology but as a product
Ongoing lessons learned ... focus on:
• Customer needs
• Developer usability
• Design
• Maintenance and support
• An agile business built on a platform of APIs
Cerved API Ecosystem
Cerved API Architecture
[Diagram: the Cerved API dev portal (CAS login, IMS user check) fronts Apigee Enterprise, which routes API requests to the Cerved API endpoint, backed by a relational DB (Oracle) with static and dynamic business rules.]
Problems:
• SLAs
• Load on the relational DB
• Business-rule performance
Cerved API Architecture (hybrid cloud)
[Diagram: the same front end (API dev portal with CAS login and IMS user check, Apigee Enterprise), but the Cerved API endpoint now spans the relational DB (Oracle), a graph DB (Neo4j), and a document DB (MongoDB), plus a cloud DB behind a cloud API endpoint.]
Cerved & MongoDB: present and future
[Diagram: Cerved OLTP systems (Oracle, Teradata) feed, via loaders and an event generator, a data lake (Cloudera) with data science tools (ML, BI, DL, ...) and MongoDB caching layers, both on premise and in the cloud (AWS, Google, ...); business logic and the Cerved API serve a marketplace, system integrators, and external partners.]
Cerved developer portal
Search API across the entire Italian economic fabric
To integrate and search, in any solution or use case:
• All Italian companies
• The most important unregistered economic activities
• The people of the Italian economic fabric
With the main registry details
Cerved developer portal
API providing a score for all Italian companies
Event-driven credit scoring computed for all Italian companies, with grading along several dimensions of the company
Cerved developer portal
The Italian Business Graph APIs
• Anti-fraud
• Procurement
• Business investigation
• Business scouting
• Data journalism
Cerved developer portal
Area Lab: powering innovation
The next challenges
✓ Better integrate the business into the development and maintenance cycle
✓ Strengthen the relationship with our clients by improving their experience
→ Increase the speed of publishing innovative APIs
→ Integrate APIs from partners and group companies
→ Strengthen the ecosystem of the ...
Thank you!

More Related Content

What's hot

Data Modeling in Looker
Data Modeling in LookerData Modeling in Looker
Data Modeling in LookerLooker
 
MongoDB at Scale
MongoDB at ScaleMongoDB at Scale
MongoDB at ScaleMongoDB
 
MongoDB Atlas
MongoDB AtlasMongoDB Atlas
MongoDB AtlasMongoDB
 
GraphFrames: Graph Queries In Spark SQL
GraphFrames: Graph Queries In Spark SQLGraphFrames: Graph Queries In Spark SQL
GraphFrames: Graph Queries In Spark SQLSpark Summit
 
A Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and HudiA Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and HudiDatabricks
 
Large Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingLarge Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingDatabricks
 
Chicago Data Summit: Apache HBase: An Introduction
Chicago Data Summit: Apache HBase: An IntroductionChicago Data Summit: Apache HBase: An Introduction
Chicago Data Summit: Apache HBase: An IntroductionCloudera, Inc.
 
Free Training: How to Build a Lakehouse
Free Training: How to Build a LakehouseFree Training: How to Build a Lakehouse
Free Training: How to Build a LakehouseDatabricks
 
Mongodb basics and architecture
Mongodb basics and architectureMongodb basics and architecture
Mongodb basics and architectureBishal Khanal
 
Diving into Delta Lake: Unpacking the Transaction Log
Diving into Delta Lake: Unpacking the Transaction LogDiving into Delta Lake: Unpacking the Transaction Log
Diving into Delta Lake: Unpacking the Transaction LogDatabricks
 
Basics of MongoDB
Basics of MongoDB Basics of MongoDB
Basics of MongoDB Habilelabs
 
Advanced Natural Language Processing with Apache Spark NLP
Advanced Natural Language Processing with Apache Spark NLPAdvanced Natural Language Processing with Apache Spark NLP
Advanced Natural Language Processing with Apache Spark NLPDatabricks
 
Introduction to MongoDB
Introduction to MongoDBIntroduction to MongoDB
Introduction to MongoDBMike Dirolf
 
DI&A Slides: Data Lake vs. Data Warehouse
DI&A Slides: Data Lake vs. Data WarehouseDI&A Slides: Data Lake vs. Data Warehouse
DI&A Slides: Data Lake vs. Data WarehouseDATAVERSITY
 
An Enterprise Architect's View of MongoDB
An Enterprise Architect's View of MongoDBAn Enterprise Architect's View of MongoDB
An Enterprise Architect's View of MongoDBMongoDB
 

What's hot (20)

MongoDB
MongoDBMongoDB
MongoDB
 
Data Modeling in Looker
Data Modeling in LookerData Modeling in Looker
Data Modeling in Looker
 
MongoDB at Scale
MongoDB at ScaleMongoDB at Scale
MongoDB at Scale
 
Nosql seminar
Nosql seminarNosql seminar
Nosql seminar
 
MongoDB Atlas
MongoDB AtlasMongoDB Atlas
MongoDB Atlas
 
GraphFrames: Graph Queries In Spark SQL
GraphFrames: Graph Queries In Spark SQLGraphFrames: Graph Queries In Spark SQL
GraphFrames: Graph Queries In Spark SQL
 
Data Engineering Basics
Data Engineering BasicsData Engineering Basics
Data Engineering Basics
 
Big Data Architectural Patterns
Big Data Architectural PatternsBig Data Architectural Patterns
Big Data Architectural Patterns
 
A Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and HudiA Thorough Comparison of Delta Lake, Iceberg and Hudi
A Thorough Comparison of Delta Lake, Iceberg and Hudi
 
Large Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingLarge Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured Streaming
 
Chicago Data Summit: Apache HBase: An Introduction
Chicago Data Summit: Apache HBase: An IntroductionChicago Data Summit: Apache HBase: An Introduction
Chicago Data Summit: Apache HBase: An Introduction
 
Free Training: How to Build a Lakehouse
Free Training: How to Build a LakehouseFree Training: How to Build a Lakehouse
Free Training: How to Build a Lakehouse
 
Mongodb basics and architecture
Mongodb basics and architectureMongodb basics and architecture
Mongodb basics and architecture
 
Diving into Delta Lake: Unpacking the Transaction Log
Diving into Delta Lake: Unpacking the Transaction LogDiving into Delta Lake: Unpacking the Transaction Log
Diving into Delta Lake: Unpacking the Transaction Log
 
Basics of MongoDB
Basics of MongoDB Basics of MongoDB
Basics of MongoDB
 
Advanced Natural Language Processing with Apache Spark NLP
Advanced Natural Language Processing with Apache Spark NLPAdvanced Natural Language Processing with Apache Spark NLP
Advanced Natural Language Processing with Apache Spark NLP
 
Introduction to MongoDB
Introduction to MongoDBIntroduction to MongoDB
Introduction to MongoDB
 
Big data, Big decision
Big data, Big decisionBig data, Big decision
Big data, Big decision
 
DI&A Slides: Data Lake vs. Data Warehouse
DI&A Slides: Data Lake vs. Data WarehouseDI&A Slides: Data Lake vs. Data Warehouse
DI&A Slides: Data Lake vs. Data Warehouse
 
An Enterprise Architect's View of MongoDB
An Enterprise Architect's View of MongoDBAn Enterprise Architect's View of MongoDB
An Enterprise Architect's View of MongoDB
 

Similar to MongoDB Breakfast Milan - Mainframe Offloading Strategies

Overcoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDBOvercoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDBMongoDB
 
MongoDB in a Mainframe World
MongoDB in a Mainframe WorldMongoDB in a Mainframe World
MongoDB in a Mainframe WorldMongoDB
 
La creación de una capa operacional con MongoDB
La creación de una capa operacional con MongoDBLa creación de una capa operacional con MongoDB
La creación de una capa operacional con MongoDBMongoDB
 
When to Use MongoDB...and When You Should Not...
When to Use MongoDB...and When You Should Not...When to Use MongoDB...and When You Should Not...
When to Use MongoDB...and When You Should Not...MongoDB
 
Creating a Modern Data Architecture for Digital Transformation
Creating a Modern Data Architecture for Digital TransformationCreating a Modern Data Architecture for Digital Transformation
Creating a Modern Data Architecture for Digital TransformationMongoDB
 
Skillwise Big Data part 2
Skillwise Big Data part 2Skillwise Big Data part 2
Skillwise Big Data part 2Skillwise Group
 
Accelerating a Path to Digital With a Cloud Data Strategy
Accelerating a Path to Digital With a Cloud Data StrategyAccelerating a Path to Digital With a Cloud Data Strategy
Accelerating a Path to Digital With a Cloud Data StrategyMongoDB
 
Transform your DBMS to drive engagement innovation with Big Data
Transform your DBMS to drive engagement innovation with Big DataTransform your DBMS to drive engagement innovation with Big Data
Transform your DBMS to drive engagement innovation with Big DataAshnikbiz
 
The Double win business transformation and in-year ROI and TCO reduction
The Double win business transformation and in-year ROI and TCO reductionThe Double win business transformation and in-year ROI and TCO reduction
The Double win business transformation and in-year ROI and TCO reductionMongoDB
 
L’architettura di Classe Enterprise di Nuova Generazione
L’architettura di Classe Enterprise di Nuova GenerazioneL’architettura di Classe Enterprise di Nuova Generazione
L’architettura di Classe Enterprise di Nuova GenerazioneMongoDB
 
Choosing technologies for a big data solution in the cloud
Choosing technologies for a big data solution in the cloudChoosing technologies for a big data solution in the cloud
Choosing technologies for a big data solution in the cloudJames Serra
 
L’architettura di classe enterprise di nuova generazione
L’architettura di classe enterprise di nuova generazioneL’architettura di classe enterprise di nuova generazione
L’architettura di classe enterprise di nuova generazioneMongoDB
 
Unlocking Operational Intelligence from the Data Lake
Unlocking Operational Intelligence from the Data LakeUnlocking Operational Intelligence from the Data Lake
Unlocking Operational Intelligence from the Data LakeMongoDB
 
Data Treatment MongoDB
Data Treatment MongoDBData Treatment MongoDB
Data Treatment MongoDBNorberto Leite
 
Overcoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDBOvercoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDBMongoDB
 
Enabling Telco to Build and Run Modern Applications
Enabling Telco to Build and Run Modern Applications Enabling Telco to Build and Run Modern Applications
Enabling Telco to Build and Run Modern Applications Tugdual Grall
 
MongoDB Europe 2016 - The Rise of the Data Lake
MongoDB Europe 2016 - The Rise of the Data LakeMongoDB Europe 2016 - The Rise of the Data Lake
MongoDB Europe 2016 - The Rise of the Data LakeMongoDB
 

Similar to MongoDB Breakfast Milan - Mainframe Offloading Strategies (20)

Overcoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDBOvercoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDB
 
MongoDB in a Mainframe World
MongoDB in a Mainframe WorldMongoDB in a Mainframe World
MongoDB in a Mainframe World
 
La creación de una capa operacional con MongoDB
La creación de una capa operacional con MongoDBLa creación de una capa operacional con MongoDB
La creación de una capa operacional con MongoDB
 
When to Use MongoDB...and When You Should Not...
When to Use MongoDB...and When You Should Not...When to Use MongoDB...and When You Should Not...
When to Use MongoDB...and When You Should Not...
 
Creating a Modern Data Architecture for Digital Transformation
Creating a Modern Data Architecture for Digital TransformationCreating a Modern Data Architecture for Digital Transformation
Creating a Modern Data Architecture for Digital Transformation
 
Skillwise Big Data part 2
Skillwise Big Data part 2Skillwise Big Data part 2
Skillwise Big Data part 2
 
Accelerating a Path to Digital With a Cloud Data Strategy
Accelerating a Path to Digital With a Cloud Data StrategyAccelerating a Path to Digital With a Cloud Data Strategy
Accelerating a Path to Digital With a Cloud Data Strategy
 
Transform your DBMS to drive engagement innovation with Big Data
Transform your DBMS to drive engagement innovation with Big DataTransform your DBMS to drive engagement innovation with Big Data
Transform your DBMS to drive engagement innovation with Big Data
 
Skilwise Big data
Skilwise Big dataSkilwise Big data
Skilwise Big data
 
The Double win business transformation and in-year ROI and TCO reduction
The Double win business transformation and in-year ROI and TCO reductionThe Double win business transformation and in-year ROI and TCO reduction
The Double win business transformation and in-year ROI and TCO reduction
 
L’architettura di Classe Enterprise di Nuova Generazione
L’architettura di Classe Enterprise di Nuova GenerazioneL’architettura di Classe Enterprise di Nuova Generazione
L’architettura di Classe Enterprise di Nuova Generazione
 
Dataweek-Talk-2014
Dataweek-Talk-2014Dataweek-Talk-2014
Dataweek-Talk-2014
 
Choosing technologies for a big data solution in the cloud
Choosing technologies for a big data solution in the cloudChoosing technologies for a big data solution in the cloud
Choosing technologies for a big data solution in the cloud
 
L’architettura di classe enterprise di nuova generazione
L’architettura di classe enterprise di nuova generazioneL’architettura di classe enterprise di nuova generazione
L’architettura di classe enterprise di nuova generazione
 
Unlocking Operational Intelligence from the Data Lake
Unlocking Operational Intelligence from the Data LakeUnlocking Operational Intelligence from the Data Lake
Unlocking Operational Intelligence from the Data Lake
 
Data Treatment MongoDB
Data Treatment MongoDBData Treatment MongoDB
Data Treatment MongoDB
 
Overcoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDBOvercoming Today's Data Challenges with MongoDB
Overcoming Today's Data Challenges with MongoDB
 
Enabling Telco to Build and Run Modern Applications
Enabling Telco to Build and Run Modern Applications Enabling Telco to Build and Run Modern Applications
Enabling Telco to Build and Run Modern Applications
 
MongoDB Europe 2016 - The Rise of the Data Lake
MongoDB Europe 2016 - The Rise of the Data LakeMongoDB Europe 2016 - The Rise of the Data Lake
MongoDB Europe 2016 - The Rise of the Data Lake
 
SoftServe BI/BigData Workshop in Utah
SoftServe BI/BigData Workshop in UtahSoftServe BI/BigData Workshop in Utah
SoftServe BI/BigData Workshop in Utah
 

More from MongoDB

MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas
MongoDB SoCal 2020: Migrate Anything* to MongoDB AtlasMongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas
MongoDB SoCal 2020: Migrate Anything* to MongoDB AtlasMongoDB
 
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!MongoDB
 
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...MongoDB
 
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDBMongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDBMongoDB
 
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...MongoDB
 
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series DataMongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series DataMongoDB
 
MongoDB SoCal 2020: MongoDB Atlas Jump Start
 MongoDB SoCal 2020: MongoDB Atlas Jump Start MongoDB SoCal 2020: MongoDB Atlas Jump Start
MongoDB SoCal 2020: MongoDB Atlas Jump StartMongoDB
 
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]MongoDB
 
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2MongoDB
 
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB
 
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB
 
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB
 
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB
 
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB
 
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB
 
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB
 
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB
 
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB
 
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB
 
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB
 

More from MongoDB (20)

MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas
MongoDB SoCal 2020: Migrate Anything* to MongoDB AtlasMongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas
MongoDB SoCal 2020: Migrate Anything* to MongoDB Atlas
 
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!
MongoDB SoCal 2020: Go on a Data Safari with MongoDB Charts!
 
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...
MongoDB SoCal 2020: Using MongoDB Services in Kubernetes: Any Platform, Devel...
 
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDBMongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB
MongoDB SoCal 2020: A Complete Methodology of Data Modeling for MongoDB
 
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...
MongoDB SoCal 2020: From Pharmacist to Analyst: Leveraging MongoDB for Real-T...
 
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series DataMongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
MongoDB SoCal 2020: Best Practices for Working with IoT and Time-series Data
 
MongoDB SoCal 2020: MongoDB Atlas Jump Start
 MongoDB SoCal 2020: MongoDB Atlas Jump Start MongoDB SoCal 2020: MongoDB Atlas Jump Start
MongoDB SoCal 2020: MongoDB Atlas Jump Start
 
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
MongoDB .local San Francisco 2020: Powering the new age data demands [Infosys]
 
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
MongoDB .local San Francisco 2020: Using Client Side Encryption in MongoDB 4.2
 
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
MongoDB .local San Francisco 2020: Using MongoDB Services in Kubernetes: any ...
 
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
MongoDB .local San Francisco 2020: Go on a Data Safari with MongoDB Charts!
 
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your MindsetMongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
MongoDB .local San Francisco 2020: From SQL to NoSQL -- Changing Your Mindset
 
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
MongoDB .local San Francisco 2020: MongoDB Atlas JumpstartMongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
MongoDB .local San Francisco 2020: MongoDB Atlas Jumpstart
 
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
MongoDB .local San Francisco 2020: Tips and Tricks++ for Querying and Indexin...
 
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
MongoDB .local San Francisco 2020: Aggregation Pipeline Power++
 
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
MongoDB .local San Francisco 2020: A Complete Methodology of Data Modeling fo...
 
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep DiveMongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
MongoDB .local San Francisco 2020: MongoDB Atlas Data Lake Technical Deep Dive
 
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & GolangMongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
MongoDB .local San Francisco 2020: Developing Alexa Skills with MongoDB & Golang
 
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
MongoDB .local Paris 2020: Realm : l'ingrédient secret pour de meilleures app...
 
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
MongoDB .local Paris 2020: Upply @MongoDB : Upply : Quand le Machine Learning...
 

Recently uploaded

Sending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdfSending Calendar Invites on SES and Calendarsnack.pdf
Sending Calendar Invites on SES and Calendarsnack.pdf31events.com
 
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...confluent
 
MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based projectAnoyGreter
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024StefanoLambiase
 
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...
Alfresco TTL#157 - Troubleshooting Made Easy: Deciphering Alfresco mTLS Confi...Angel Borroy López
 
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样
办理学位证(UQ文凭证书)昆士兰大学毕业证成绩单原版一模一样umasea
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Velvetech LLC
 
SensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsSensoDat: Simulation-based Sensor Dataset of Self-driving Cars
SensoDat: Simulation-based Sensor Dataset of Self-driving CarsChristian Birchler
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalLionel Briand
 
How to submit a standout Adobe Champion Application
How to submit a standout Adobe Champion ApplicationHow to submit a standout Adobe Champion Application
How to submit a standout Adobe Champion ApplicationBradBedford3
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odishasmiwainfosol
 
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Natan Silnitsky
 
Powering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsPowering Real-Time Decisions with Continuous Data Streams
Powering Real-Time Decisions with Continuous Data StreamsSafe Software
 
Comparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfComparing Linux OS Image Update Models - EOSS 2024.pdf
Comparing Linux OS Image Update Models - EOSS 2024.pdfDrew Moseley
 
Folding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesFolding Cheat Sheet #4 - fourth in a series
Folding Cheat Sheet #4 - fourth in a seriesPhilip Schwarz
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
MongoDB Breakfast Milan - Mainframe Offloading Strategies

• Frequent real-time updates
  • 15. Reference Architecture / 3 Producers (Files, Orders, MDM, Logistics, (…), Legacy DBs) → (E)TL, e.g. Abinitio, Talend, Informatica → Data Lake + Column DB + Search Engine → APIs → Consumers / Channels: Web, Mobile, B2C, CMS, (…)
  • 16. Issues • Data Model • Designed for analytics, not for the applications • Not Real-Time • The ETL process takes a long time to traverse all the components • The search engine is limited in query capabilities • The column DB is limited in data-model flexibility and high availability (HA)
  • 17. Legacy Optimization with MongoDB Business Drivers: • Extending legacy applications to new channels & users • Mobile & web, IoT sensor data integration, single view of data • Different data access, performance and scalability requirements • Conserve valuable MIPS, avoid resource contention • Move from a product-centric to a customer-centric schema Solution: • Operational Data Store built on MongoDB
  • 18. Banks Situation: The Past • Banks used mainframe systems to store critical data that was at the core of their business: account data, transaction data, other core financial data, etc. • Internal front office systems were used by bank employees (“Bankers”) to access this critical data and deliver it to clients. • Critical bank practices (data governance, backup, HA/DR, auditing, etc.) evolved with mainframe systems as a core element of the practices and solutions. Mainframes were the golden source of data before we invented the term “golden source”.
  • 19. The Past Interactions [Diagram: the Client interacts with the Banker and directly with the ATM; front office applications (teller systems, advisor tools) send data requests to the mainframe infrastructure (get account info, get investment info, risk, manage client profile), while the ATM has direct access to get/update the account balance.]
  • 20. Present [Diagram: Client and Banker now reach the mainframe through several channels. The mobile app loads its home screen via a logic layer (service calls, orchestration, aggregation, data transformation) that calls account services, client data services and the access log (filter = mobile), plus other data sources such as the last 10 mobile touchpoints for the customer. The web app does the same with the access log filtered on the last 7 days of web activity. The teller/POS app additionally reads bank-confidential data and reference data through a specialized integration point. These operations are typically read on demand, with caching et al.]
  • 21. Use Case: Monthly UK Payday Situation: almost everyone in London gets paid on the same day (unlike in North America), once a month at the end of the calendar month. Almost everyone in London then uses their bank's mobile app to check their balance on that same day, once a month. People used to have to go to the bank to get their balance, but now everyone has an app on their phone they can use to check it. The fact that it is so easy to check a bank balance, combined with the significant time between paydays, means that most people check their balance on payday at least once, if not multiple times. Cost: the mainframe infrastructure to support the traffic spike caused by this use case costs banks millions of dollars a year.
  • 22. An abstract, logical view If we strip away some details to investigate a more fundamental, abstract, logical view of the current state, it looks like this: [Diagram: every consumer (function, thing, app, other) embeds its own app + data logic and pulls the data it needs from the mainframe services + infrastructure, mainframe data, other data sources and reference data, plus other services.]
  • 23. Some simple changes By moving the app/data logic out of the clients and replacing it with a data logic layer between the legacy DBs and the MongoDB ODS, we can deliver significant value and reduce complexity and redundancy. [Diagram: the data logic layer handles schema, transformation, rules, orchestration, audit and plumbing; it reads from the legacy data in batch, near real time or real time and writes to the MongoDB ODS; functions, apps, things, analysis and other services all use the ODS data, enriched with other data sources and reference data.]
  • 24. Why MongoDB • Optimized Data Model • The data is transformed during the replication phase into a customer-centric data model, instead of a product-centric one; this is called a ”Single View” (see the sketch below) • Flexibility • The flexible document-based model removes the need to update the schema to store new information • Architecture for Modern Applications • MongoDB has a scalable, geographically distributed architecture for always-on services • Integration with Analytical Systems
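To make the ”Single View” idea concrete, here is a minimal sketch of what a customer-centric document in the ODS could look like, written with pymongo; all field names are hypothetical and not taken from the deck:

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["ods"]

    # One document per customer, gathering what the legacy side spreads across
    # product-centric tables (illustrative shape, hypothetical field names).
    db.customers.insert_one({
        "_id": "CUST-000123",
        "name": {"first": "Anna", "last": "Rossi"},
        "contracts": [
            {"type": "account", "iban": "IT60X0542811101000000123456", "balance": 1520.50},
            {"type": "card", "last4": "4242", "status": "active"},
        ],
        "channels": {"mobile": True, "web": True},
    })
    # New attributes can appear in later documents without any schema migration.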
  • 25. Benefits • Reduce Development Time • The customer-centric data model, together with MongoDB's flexibility and document data representation, reduces development time by more than 50% • Improve the SLA • MongoDB's scalable, always-on architecture improves uptime and performance, providing a better customer experience • Reduce Costs • Move MIPS from the legacy system to commodity low-cost hardware
  • 26. Reference Architecture Producers (Files, Orders, MDM, Logistics, (…), Mainframe, additional data sources & systems) feed, via (E)TL (e.g. Abinitio, Talend, Informatica, custom), a CDC process (Change Data Capture, Attunity) and message queues (e.g. Kafka, ActiveMQ, RabbitMQ), into the Operational Data Store (some ingestion paths are optional); an API layer (authorisation, authentication, logging, etc.) exposes it to the Consumers / Channels: Web, Mobile, B2C, CMS, (…), APIs.
  • 27. Attunity Replicate Universal | Real-Time | Simpler. Universal Platform for Data Replication. #1 Independent Provider of Streaming CDC
  • 29. Attunity Replicate Rapidly Move Data Across Complex Hybrid Environments Sources (RDBMS, Data Warehouse, Hadoop; on premises or cloud platform) to Targets (RDBMS, Data Warehouse, Hadoop), with WAN-Optimized Data Transfer: Compression | Multi-pathing | Encryption
  • 30. Attunity Replicate Go Agile with Automated Processes • Target schema creation • Heterogeneous data type mapping • Batch to CDC transition • DDL change propagation • Filtering • Transformations [Sources: Hadoop, Files, RDBMS, Mainframe; Targets: Hadoop, Files, RDBMS, Kafka, EDW]
  • 31. Attunity Replicate Zero Footprint Architecture • CDC identifies source updates by scanning change logs • No software agents required on sources or targets • Minimizes administrative overhead Low Infrastructure Impact • Log-based CDC • Source-specific optimization • Uses native DB clients for security [Sources: Hadoop, Files, RDBMS, Mainframe; Targets: Hadoop, Files, RDBMS, Kafka, EDW]
  • 32. Attunity Replicate Streaming CDC to Apache Kafka [Diagram: transaction logs feed the CDC process; in-memory optimized metadata management and data transport streams change messages in JSON/Avro data format onto a message topic, while the schema is published in JSON/Avro schema format onto a schema topic; an initial bulk load supplies the batch data, followed by the real-time data flow.]
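For illustration only, a generic log-based CDC change event on the data topic could look like the following Python literal; the field names are hypothetical, and the actual Attunity envelope depends on its configuration:

    # A generic change event (hypothetical fields, not Attunity's actual format):
    change_event = {
        "table": "BANK.ACCOUNT",
        "op": "UPDATE",                    # INSERT | UPDATE | DELETE
        "ts": "2017-07-06T09:30:12.345Z",  # commit timestamp from the source log
        "key": {"ACCOUNT_ID": 42},
        "before": {"BALANCE": 100.00},     # row image prior to the change
        "after": {"BALANCE": 250.00},      # row image after the change
    }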
  • 33. Attunity Replicate User Interface • Intuitive web-based GUI • Drag and drop, wizard-assisted configuration steps • Consistent process for all sources and targets Guided User Experience
  • 34. Attunity Enterprise Manager (AEM) Manage Data Ingest and Replication At Scale • Centralize design and control § Loading, mapping, DDL changes § Stop, start, resume, reload § Automated status discovery • Manage security controls § Granular access controls § Full audit trail • Customize your views § Group, search, filter, sort and drill down on tasks § Respond to real-time alerts • Leverage graphical dashboard and APIs (REST and .NET)
  • 36. Demo! The scenario • 3 tables: • Customer • Account • Operations (transactions) • An operation changes the balance of an account within the same database transaction • The same transactional integrity has to be replicated to the MongoDB data store (one way to do this is sketched below)
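Since MongoDB (3.x, current when this deck was written) guarantees atomicity at the single-document level, one way to preserve the account/operation integrity on the target is to embed the operations in the account document and apply both changes in one update. This is only a sketch of that idea, assuming an embedded operations array; the live demo may model the data differently:

    from pymongo import MongoClient

    accounts = MongoClient()["ods"]["account"]

    def apply_operation(account_id, amount, op_id):
        # One document, one update: the balance change and the appended
        # operation become visible atomically, mirroring the source transaction.
        accounts.update_one(
            {"_id": account_id},
            {
                "$inc": {"balance": amount},
                "$push": {"operations": {"op_id": op_id, "amount": amount}},
            },
        )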
  • 37. Reference Architecture with a twist The same pipeline as slide 26 (Producers → (E)TL / CDC process (Attunity) → message queues, e.g. Kafka, ActiveMQ, RabbitMQ → Operational Data Store → API layer → Consumers / Channels), this time with the message queue at the centre of the flow.
  • 38. What is Kafka? • Kafka is a distributed publish-subscribe messaging system organized in topics. • It's designed to be • Fast • Scalable • Durable • When used in the right way and for the right use case, Kafka has unique attributes that make it a highly attractive option for data integration.
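As a minimal illustration of the publish-subscribe model (using the kafka-python client and a hypothetical topic name):

    import json
    from kafka import KafkaProducer  # kafka-python client

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        key_serializer=lambda k: k.encode("utf-8"),
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    # Messages sharing a key go to the same partition, so per-key ordering
    # is preserved for every subscribed consumer group.
    producer.send("cdc.transactions", key="CUST-000123",
                  value={"op": "UPDATE", "table": "ACCOUNT"})
    producer.flush()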
  • 39. First approach [Diagram: the CDC process writes all changes (Customer, Account, Operation) of each transaction into a single topic.] • All DBMS transactions are in the same topic and their order is guaranteed • A consumer reads the topic and persists the changes sequentially into MongoDB • No other tools are needed; all processing and persistence is managed with Kafka and MongoDB alone • Higher parallelism can be achieved by partitioning the Kafka topic by Customer ID (a consumer sketch follows below)
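A minimal sketch of such a consumer, assuming CDC events shaped like the earlier example and hypothetical topic, group and collection names:

    import json
    from kafka import KafkaConsumer
    from pymongo import MongoClient

    consumer = KafkaConsumer(
        "cdc.transactions",
        group_id="ods-writer",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        enable_auto_commit=False,
    )
    db = MongoClient()["ods"]

    for msg in consumer:
        evt = msg.value
        coll = db[evt["table"].split(".")[-1].lower()]
        if evt["op"] == "DELETE":
            coll.delete_one(evt["key"])
        else:
            # INSERT and UPDATE applied idempotently as upserts, in topic order.
            coll.replace_one(evt["key"], {**evt["key"], **evt["after"]}, upsert=True)
        consumer.commit()  # commit the offset only after the write succeeded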
  • 40. Second approach: Distributed Transactions [Diagram: the CDC process publishes account changes to Public.Account and operation changes to Public.Transaction; the streams are joined by the change message key into Private.Account and Private.Transaction, so that each account change is paired with its corresponding operation (A+O) before being applied.]
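The pairing logic can be sketched roughly as follows (an in-memory toy with hypothetical topic names and event shapes; a production version would need durable state and ordering guarantees):

    import json
    from collections import defaultdict
    from kafka import KafkaConsumer
    from pymongo import MongoClient

    consumer = KafkaConsumer("public.account", "public.transaction",
                             bootstrap_servers="localhost:9092",
                             value_deserializer=lambda m: json.loads(m.decode("utf-8")))
    db = MongoClient()["ods"]
    pending = defaultdict(dict)  # change-message key -> partial A/O pair

    for msg in consumer:
        evt = msg.value
        side = "account" if msg.topic == "public.account" else "operation"
        pair = pending[evt["tx_key"]]
        pair[side] = evt
        if len(pair) == 2:  # both halves of the source transaction have arrived
            db.account.replace_one({"_id": pair["account"]["key"]},
                                   pair["account"]["after"], upsert=True)
            db.operations.insert_one(pair["operation"]["after"])
            del pending[evt["tx_key"]]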
  • 43. 5 phases of Legacy Offloading MongoDB can help you offload MIPS from the legacy systems, save double-digit percentages in cost and increase agility and capabilities for new use cases at the same time. 1) Operational Data Layer (ODL), offloading reads: records are copied via a CDC/delta-load mechanism from the legacy DB into MongoDB, which serves as the ODL, e.g. for frequent reads. 2) Enriched ODL: the ODL data is enriched with additional sources to serve as an operational intelligence platform for insights and analytics. 3) “Y-Loading”, offloading reads & writes: writes are performed concurrently to the legacy system as well as MongoDB, e.g. via a service-driven architecture. 4) “MongoDB first”: transactions are written first to MongoDB, which passes the data on to the legacy system of record. 5) System of Record, transforming the role of the mainframe: MongoDB serves as the system of record for a multitude of applications, with deferred writes to the legacy system if necessary. Scope and business benefits grow along these phases.
  • 44. Offloading Reads Initial use cases primarily focus on offloading costly reads, e.g. querying large numbers of transactions for analytics or historical views across customer data. Operational Data Layer (ODL): using a change data capture (CDC) or delta-load mechanism, you create an operational data layer alongside the mainframe that serves read-heavy operations. Enriched Operational Data Layer (ODL): additional data sources are loaded into the ODS to create an even richer picture of your existing data and enable additional use cases like advanced analytics. [Diagram: with a plain ODL, mainframe reads fall from 100% to roughly 50-90% (the ODL serving 10-50%); with an enriched ODL the split moves to roughly 25-75% each, while writes stay 100% on the mainframe.]
  • 45. Offloading Reads & Writes By introducing a smarter architecture to orchestrate writes concurrently, e.g. via a microservices architecture, you can shift away from delayed CDC or delta-load mechanisms. Y-Loading: writing (some) data concurrently into the mainframe as well as MongoDB enables you to further limit interactions with the mainframe technology. It also sets you up for a more transformational shift of the role of the mainframe within your enterprise architecture. [Diagram: the application talks to a microservices / API layer; reads split roughly 75-90% vs 10-25% and writes roughly 20-60% vs 40-80% between MongoDB and the mainframe.] A minimal Y-loading sketch follows below.
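A minimal Y-loading sketch, assuming a hypothetical legacy write path exposed to the service layer; this illustrates the concurrency idea only, not a production dual-write pattern:

    from concurrent.futures import ThreadPoolExecutor
    from pymongo import MongoClient

    mongo = MongoClient()["ods"]

    def write_legacy(txn):
        """Placeholder for the existing mainframe write path (e.g. a service
        call through the integration layer)."""
        ...

    def write_mongo(txn):
        mongo.operations.insert_one(txn)

    def y_load(txn):
        # Fan the write out to both systems and fail the request if either
        # write fails, so the two stores cannot silently diverge.
        with ThreadPoolExecutor(max_workers=2) as pool:
            for f in [pool.submit(write_legacy, txn), pool.submit(write_mongo, txn)]:
                f.result()  # re-raises any exception from either write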
  • 46. Transforming the role of the legacy With a shift towards writing to MongoDB first before writing to the mainframe (if at all), you further change the meaning of “system of record” and “mainframe” within the organisation. “MongoDB first”: transactions are first written to MongoDB, which can serve as a buffer before it passes transactions on to the mainframe as System of Record (reads roughly 50-80% on MongoDB vs 20-50% on the mainframe; writes 10-40% vs 60-90%). System of Record: MongoDB serves as the main System of Record, with writes optionally being passed on to the mainframe for legacy applications only, or the mainframe gets decommissioned entirely (reads 50-90% vs 10-50%; writes 90-100% vs 0-10%). A “MongoDB first” sketch follows below.
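A minimal “MongoDB first” sketch, with MongoDB acting as a buffer in front of the legacy system; names and the forwarding mechanism are hypothetical:

    from pymongo import MongoClient

    db = MongoClient()["ods"]

    def write_first(txn):
        # The client's write completes as soon as MongoDB has persisted it.
        txn["forwarded"] = False
        db.operations.insert_one(txn)

    def send_to_mainframe(txn):
        """Placeholder for the deferred write to the legacy system of record."""
        ...

    def forward_pending():
        # Background job draining the buffer towards the mainframe.
        for txn in db.operations.find({"forwarded": False}):
            send_to_mainframe(txn)
            db.operations.update_one({"_id": txn["_id"]},
                                     {"$set": {"forwarded": True}})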
  • 47.
  • 48. Mainframe Offloading enables insight through a Single View of the Customer A Spanish bank replaces Teradata and MicroStrategy to increase business insight and avoid significant cost. Problem: branches required an application that offered all information about a given customer and all their contracts (accounts, loans, cards, etc.). There was multi-minute latency for accessing customer data stored in Teradata and MicroStrategy; in addition, accessing data frequently from the legacy systems would cause spikes in MIPS and related cost. Solution: offloaded to MongoDB, where data is highly available and can be accessed by new applications and channels. Built a single view of the customer on top of MongoDB: a flexible and scalable app, easy to adapt to new business needs, with super-fast ad hoc query capabilities (milliseconds) and real-time analytics thanks to MongoDB's Aggregation Framework (see the sketch below). Can now leverage distributed infrastructure and commodity hardware for lower total cost of ownership and greater availability. Results: cost avoidance of $10M+. Application developed and deployed in less than 6 months. New business policies easily deployed and executed, bringing new revenue to the company. Current capacity allows branches to load all customer info instantly, in milliseconds, providing a great customer experience. New applications and services can be built on the same data platform without increasing MIPS/cost or adding risk by putting more stress on legacy systems.
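For a flavour of the real-time analytics mentioned above, here is a small Aggregation Framework sketch over a single-view collection; the collection and field names are hypothetical:

    from pymongo import MongoClient

    customers = MongoClient()["ods"]["customers"]

    # Total balance per contract type for one branch, computed on demand in
    # milliseconds instead of waiting on a legacy batch job.
    pipeline = [
        {"$match": {"branch_id": "0042"}},
        {"$unwind": "$contracts"},
        {"$group": {
            "_id": "$contracts.type",  # accounts, loans, cards, ...
            "total_balance": {"$sum": "$contracts.balance"},
            "n_contracts": {"$sum": 1},
        }},
        {"$sort": {"total_balance": -1}},
    ]
    for row in customers.aggregate(pipeline):
        print(row["_id"], row["total_balance"], row["n_contracts"])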
  • 49. Database-as-a-Service Migration from Oracle & Microsoft to create a consolidated “data fabric” saves $millions in cost, speeds application development & simplifies operations. Problem: high licensing costs from proprietary database and data grid technologies; data duplication across systems with complex reconciliation controls; high operational complexity impacting service availability and speed of application delivery. Solution: implemented a multi-tenant PaaS with a shared data service based on MongoDB, accessed via a common API with message routing via Kafka; standardized data structures for storage and communication based on the JSON format; multi-sharded, cross-data-center deployment for scalability and availability. Results: $millions in savings after migration from Coherence, Oracle Database and Microsoft SQL Server; new apps developed in days vs months; 100% uptime with a simplified platform architecture, higher utilization and a reduced data center footprint.
  • 50. RBS's Investor Report FY'16 During their recent FY 2016 Investor Report, RBS CEO Ross McEwan highlighted their MongoDB Data Fabric platform as a key enabler to helping the Bank reduce cost significantly and dramatically increase the speed at which RBS can deploy new capabilities. “Data Fabric will help reduce cost significantly and dramatically increase the speed at which we can deploy new capabilities for our customers” (Ross McEwan, CEO, RBS)
  • 52. Future Challenges • Open Banking and Open Data • Provide APIs for 3rd-party applications • Unpredictable inbound traffic • Requires an elastic infrastructure • Provisioning times are currently very high • Blockchain • Will have a huge impact on how transactions are managed
  • 53. Future Architecture Producers (Files, Orders, MDM, Logistics, (…), Mainframe, additional data sources & systems) → Attunity CDC → Kafka → consumers (read, transform, apply) feeding both an on-premise Operational Data Store behind an INTERNAL API (Consumers / Channels: Web, Mobile, B2C, CMS, (…), APIs) and a cloud data store behind an OPEN API (third-party payment, web, mobile). Optional: write-back mechanism.
  • 55. Stefano Gatti, Head of Innovation and Data Sources July 6th 2017 Cerved API: the journey from project to platform
  • 56. Index • Cerved and its “data” • “Liquid” data and the “solid” algorithm • Welcome to the API Economy, but ... • Cerved API Ecosystem • The next challenges
  • 57. Cerved and its data
  • 58. Business Areas & Numbers CREDIT INFORMATION: protect against credit risk MARKETING SOLUTIONS: grow with new business opportunities CREDIT MANAGEMENT: manage and recover non-performing loans ✓ Documents: 1,000 reports/min ✓ Lines of SW code: 40 million ✓ Clients: 34,000 ✓ Payment records: 60 million ✓ People: 2,000 ✓ Revenue: €377 million (2016)
  • 59. Our most important V: “Variety” [Diagram: data sources plotted by accuracy and complexity: web data, open data, proprietary data, official non-chamber data, official chamber-of-commerce data.]
  • 60. The delivery infrastructure Sourcing, business rules, products, delivery, operations, platform: 1 PB of data; 3,000 business rules; 600 million data-monitoring events per year; 350 operators on internal software; 50 delivery websites; > 200 B2B projects; > 500 products; 80% time-critical fulfilments; 1,350 servers; 18 of the 30 most widespread databases in production.
  • 61. The API context and its numbers ONE DAY AT CERVED: SOAP services/reports: over 1,000; microservices: 3,000; registry searches: 110,000; rating calculations: 300,000; service calls: 10,000,000 (intra-farm: 5,000,000; legacy: 2,500,000; SOAP: 1,500,000; REST: 1,000,000); data events: 4,500,000; document-storage operations: 6,500,000; information blocks delivered: 5,000,000.
  • 62. “Liquid” data and the “solid” algorithm
  • 63. The Big Data Economy Liquid data: it pervades growing companies ...
  • 64. The arrival of the Algorithmic Economy? Algorithms give solidity and usability to the “big data” world
  • 65. Big data & algorithms To make better decisions ... “Co-founder of the MIT Media Lab, pioneer of human-machine interaction and among the most important data scientists in the world”: Sandy Pentland. Source: http://www.betterdecisions.it
  • 66. API Economy trend A tool to make the pervasiveness of data and algorithms more agile [Chart: technology trend vs business trend, 2012 to 2016]
  • 67. Welcome to the API Economy, but ...
  • 68. API Economy approach: data & algorithmic driven Overview The Cerved API OFFERING serves different types of “consumer”: BUSINESS (e.g. Sales & Marketing, Finance & Operations, Strategy and Development, ...) and IT (e.g. banks, financial institutions, large enterprises, web agencies, business-news bloggers, system integrators, management-software developers, SMEs, ...), with the API as a “building block” according to a “store” model.
  • 69. API Economy and the Cerved vision [Diagram: the Cerved platform exposes data and algorithms (raw data, external data, custom algorithms) through a data integration layer and an API Portal + API Gateway to clients, partners and third parties; start-ups and developers contribute innovative components, connectors and applications through open innovation and hackathons.]
  • 70. APIs not only as technology but as a product Ongoing lessons learned ... Focus on: • Customer needs • Developer usability • Design • Maintenance and support • An agile business built on a platform of APIs
  • 72. Cerved API Architecture [Diagram: Apigee layer with the Cerved API Dev Portal, CAS login and IMS user check in front of Apigee Enterprise; API requests hit the Cerved API endpoint backed by a relational DB (Oracle) with static and dynamic business rules.] Problems: • SLA • Load on the relational DB • Business-rule performance
  • 73. Cerved API Architecture (hybrid cloud) [Diagram: same Apigee front end, but API requests can now also be served from the cloud: cloud DBs (relational DB (Oracle), graph DB (Neo4j), document DB (MongoDB)) with static and dynamic business rules behind cloud-hosted Cerved API endpoints.]
  • 74. Cerved & MongoDB: present and future [Diagram: Cerved OLTP systems (Oracle, Teradata) feed, via loaders and event generators, MongoDB caching layers that sit in front of the business logic behind the Cerved API marketplace (system integrators, external partners); data also flows into a data lake (Cloudera) for data science tools (ML, BI, DL, ...), and the MongoDB caching layer is replicated to the cloud (AWS, Google, ...).] A read-through sketch of the caching-layer idea follows below.
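A read-through sketch of the caching-layer idea, with hypothetical names; the real loaders in the diagram push data in proactively, whereas this variant fills the cache lazily on a miss:

    from pymongo import MongoClient

    cache = MongoClient()["cache"]["companies"]

    def load_from_oltp(company_id):
        """Placeholder for the query against the Oracle/Teradata OLTP source."""
        ...

    def get_company(company_id):
        doc = cache.find_one({"_id": company_id})
        if doc is None:
            # Miss: fetch from the system of record and populate the cache.
            doc = load_from_oltp(company_id)
            cache.replace_one({"_id": company_id}, doc, upsert=True)
        return doc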
  • 76. Cerved developer portal A search API across the entire Italian economic fabric To integrate and search in any solution or use case: • All Italian companies • The most relevant non-registered economic activities • The people of the Italian economic fabric With the main registry details
  • 77. Cerved developer portal An API that provides the score for all Italian companies Credit scoring computed event-driven on all Italian companies, with grading along several dimensions of the company
  • 78. Cerved developer portal The Italian Business Graph APIs Anti-fraud, procurement, business investigation, business scouting, data journalism
  • 79. Cerved developer portal Area Lab: powering innovation
  • 81. The next challenges ✓ Better integrate the business into the development and maintenance cycle ✓ Strengthen the relationship with our clients by improving their experience ➢ Increase the speed at which innovative APIs are published ➢ Integrate APIs from partners and group companies ➢ Strengthen the ecosystem of the ...