CCNxCon2012: Session 5: Object Sizes in Named Data Networking
Ashok Narayanan (Cisco)
Dave Oran (Cisco)
Won So (Cisco)
Naveen Nathan (UCI/Cisco)
!  Applications define objects of arbitrary size
!  Therefore, NDN needs to be able to cache and deliver objects of arbitrary size
   •  The extreme case: a stream, which is infinite in size
   •  More realistic: a maximum object size that will serve the vast majority of common applications
!  This problem needs to be solved between:
   •  The naming convention
   •  The network protocol
   •  The application
!  Natural link MTUs are small
!  There’s a gap between natural MTU sizes and natural object sizes
!  There are five options to bridge this gap:
   1.  Applications must size objects to the network MTU
   2.  Rely on lower layer fragmentation & reassembly
   3.  Naming convention to identify fragments
   4.  Publisher-authored manifest to list fragments
   5.  In-network fragmentation within NDN
!  The truth lies in some combination of these
!  What are “natural object sizes”?
   •  Hard to tell, but some estimates exist…
!  Web pages average: 16KB per page, 120KB for CSS/scripts etc.
!  Pictures average: 100KB per uploaded photo, 2MB per stored photo
!  Video: 500KB-15MB per ABR video chunk
!  Email: 75KB avg, 22KB-400KB spread
!  Documents, per page: PDF 62KB, .DOC 25KB
!  Object-based formats cannot support an infinite object size
   •  It’s basically a question of picking a maximum object size
!  It’s unreasonable for applications to size objects to the smallest possible MTU
   •  Applications don’t know which link the data will traverse
!  Relying on underlying fragmentation is fragile
   •  IP fragmentation suffers from packet loss amplification & reassembly delays (3x for CCNx)
   •  Other lower layer transports may offer no fragmentation
!  CCNx currently relies on (2+3): a naming convention for 4KB objects, and lower layer fragmentation to carry these
   •  Requires fragmentation features in the lower layer
   •  Architectural early binding to a small but MTU-independent chunk size
   •  Imposes restrictions on naming
!  Manifest-based schemes are slightly better (??)
   •  Publisher selects a private scheme for chunk naming
   •  Object fetch retrieves a manifest of chunk names
   •  Still requires an MTU-independent fragment size
!  If CCNx is truly to operate on any link layer, fragmentation needs to be built into the CCNx layer
   •  Can’t rely on the lower layer to support fragmentation
   •  Can’t size fragments to the MTU at publication
!  Given that the CCNx layer can fragment and reassemble, how far should it go?
!  Publishers sign the complete object
!  Any node can (re-)fragment at will
!  Each fragment should be completely identifiable as part of its object
!  Fragment cut-through forwarding without reassembly
   •  Critical to remove the latency penalty
[Figure: fragmentation packet format]
   •  Original content object: Signature, Name, Signed-Info, Data
   •  First fragment: Fragment-Info header, followed by the Signature, Name, Signed-Info and the first slice of Data
   •  Subsequent fragments: Fragment-Info header, followed by the next slice of Data
   •  Fragment-Info fields: ContentSize, FragmentOffset, FragmentSize, Key (Name)
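For illustration only, a minimal Python sketch of the format in the figure. The Fragment-Info field names come from the figure; the types, the fragment_object helper, and everything about the wire encoding are assumptions, not the ccnx implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FragmentInfo:
    """Per-fragment header; field names follow the figure above."""
    content_size: int      # ContentSize: total size of the encoded object
    fragment_offset: int   # FragmentOffset: byte offset of this fragment
    fragment_size: int     # FragmentSize: payload bytes in this fragment
    key_name: str          # Key (Name): identifies the publisher's signing key

def fragment_object(encoded_object: bytes, key_name: str,
                    max_payload: int) -> List[Tuple[FragmentInfo, bytes]]:
    """Cut an already-signed, encoded content object into fragments.

    The publisher signs the whole object, so fragments are cut purely by
    byte offset; the first fragment naturally carries the Signature, Name
    and Signed-Info because they sit at the front of the encoding.
    """
    fragments = []
    total = len(encoded_object)
    for offset in range(0, total, max_payload):
        payload = encoded_object[offset:offset + max_payload]
        info = FragmentInfo(content_size=total,
                            fragment_offset=offset,
                            fragment_size=len(payload),
                            key_name=key_name)
        fragments.append((info, payload))
    return fragments
```

Because a fragment is described purely by offset and size against the signed object, a midpoint node can re-fragment to a smaller MTU by re-running the same cut over a fragment's payload, which is what makes "any node can (re-)fragment at will" workable.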
!  Fragments match a pending interest
   •  … but don’t satisfy it
!  Node immediately forwards fragments towards matching PIT entry face(s)
!  Node keeps track of the fragment set in the PIT (sketch below)
   •  Consume the entry once all fragments are forwarded
!  End-host consumes the reassembled object
!  Cache delivers whole objects
   •  Physical reassembly not actually required
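A sketch of the cut-through behaviour described above, reusing the FragmentInfo sketch from the previous slide. The PIT-entry bookkeeping and function names are illustrative assumptions, not the ccnd data structures.

```python
from typing import Callable, Dict, List, Optional

class PitEntry:
    """Illustrative PIT entry tracking how many payload bytes were forwarded."""
    def __init__(self, faces: List[int]):
        self.faces = faces                       # downstream faces with a pending interest
        self.total_size: Optional[int] = None    # from Fragment-Info ContentSize
        self.forwarded = 0                       # payload bytes forwarded so far

    def complete(self) -> bool:
        return self.total_size is not None and self.forwarded >= self.total_size

def on_fragment(pit: Dict[str, PitEntry], name: str, info: FragmentInfo,
                payload: bytes, send: Callable[[int, bytes], None]) -> None:
    """Cut-through forwarding: relay each fragment as it arrives, no reassembly."""
    entry = pit.get(name)
    if entry is None:
        return                                   # no matching pending interest: drop
    entry.total_size = info.content_size
    entry.forwarded += info.fragment_size        # duplicate suppression omitted
    for face in entry.faces:
        send(face, payload)                      # forward towards matching PIT face(s)
    if entry.complete():
        del pit[name]                            # consume entry once all fragments are out
```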
!  Flow balance is changed
   •  Significant issue for large objects
   •  Requires a per-hop congestion control scheme which can handle variable object sizes
   •  We believe an appropriate scheme exists
!  Ack/Nack scheme required?
   •  Optional if the lower layer is reliable
   •  Nack: add a “subrange fetch” field to the Interest? (speculative sketch below)
   •  Reliability can be end-to-end or hop-by-hop
   •  Degenerate solution: small objects, no repair
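Since the slide only raises the “subrange fetch” field as an open question, the following is a purely speculative sketch of how a receiver might Nack missing byte ranges; the field name, the message shape, and the helper names are invented for illustration.

```python
from typing import Callable, Dict, List, Tuple

def missing_ranges(total_size: int,
                   received: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Return the [start, end) byte ranges not covered by received (offset, size) pairs."""
    gaps, cursor = [], 0
    for offset, size in sorted(received):
        if offset > cursor:
            gaps.append((cursor, offset))
        cursor = max(cursor, offset + size)
    if cursor < total_size:
        gaps.append((cursor, total_size))
    return gaps

def nack_missing(name: str, total_size: int, received: List[Tuple[int, int]],
                 send_interest: Callable[[Dict], None]) -> None:
    """Re-express an Interest carrying a hypothetical subrange-fetch field."""
    for start, end in missing_ranges(total_size, received):
        send_interest({"name": name, "subrange_fetch": (start, end)})
```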
!  Individual fragments cannot be authenticated in the network
   •  Complete object can be authenticated today
   •  Reassemble and authenticate only at key points?
   •  We’re investigating whether this can be solved
!  End-to-end flow control is altered
   •  Can deliver (a lot of) data towards a dead or uninterested client
   •  Depending on the maximum object size, a “cancel interest” message may be required
   •  We’re investigating in which cases this is needed
!  What runs above CCNx fragmentation?
   1.  Name-based object chunking
   2.  Manifest-based sub-object chunking
   3.  Nothing (application-level chunking)
!  Choice of 1 vs. 2 depends on the palatability of naming conventions for fragmentation
!  As the maximum supported object size increases, option 3 becomes more viable
   •  But this only makes sense if the aforementioned problems are brought under control
   •  We’ll experiment with our implementation to determine a good maximum CCNx object size, and a chunking strategy
!  Beyond replacing IP fragmentation with CCNx fragmentation, we are not yet ready to make a recommendation.
!  Implemented this fragmentation scheme in the ccnx codebase
   •  Initial demo: fragmentation, reassembly, midpoint cut-through forwarding
!  Further investigation:
   •  Congestion control
   •  Hop-by-hop reliability
   •  Flow control
   •  Different maximum object sizes
Testbed topology: Source (/src/test) → R1 → R2 → R3 → R4 → R5 → R6 → Sink

•  R1, R2, …, R6 run ccnd modified to support Link Fragmentation.
•  Sink node runs: ccnget /src/test
•  Source node runs: ccnput /src/test < test16KB
   (payload is 16000B; generates a ~16KB ccnb-encoded packet)
•  ccnd instances connect using UDP over Ethernet (1500 MTU)
   - Accounting for IP/UDP headers and trailing Sequence packets, the UDP face MTU is 1500-20-8-9 = 1463 (see the arithmetic sketch below)
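Restating the testbed arithmetic, plus a rough feel for how many fragments the 16KB test object needs: the 1463-byte face MTU is taken from the slide, while the per-fragment header overhead is an assumed figure, not one given in the talk.

```python
import math

ETH_MTU = 1500
IP_HDR, UDP_HDR, SEQ_TRAILER = 20, 8, 9
face_mtu = ETH_MTU - IP_HDR - UDP_HDR - SEQ_TRAILER    # 1463, as stated above

object_size = 16000        # test16KB payload; the ccnb packet is slightly larger
frag_overhead = 40         # assumed per-fragment Fragment-Info/encoding overhead
payload_per_fragment = face_mtu - frag_overhead
fragments_needed = math.ceil(object_size / payload_per_fragment)

print(face_mtu, fragments_needed)   # 1463, roughly a dozen fragments
```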
!  Individual fragments should be forwarded independently of the others
!  Per-hop reassembly can incur significant delays (comparison below)
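A back-of-the-envelope comparison of why cut-through matters, using the testbed's six hops. The link rate is an assumed figure and propagation delay is ignored, so the numbers are illustrative rather than measured.

```python
hops = 6                  # R1..R6 in the testbed above
link_rate = 100e6 / 8     # assumed 100 Mb/s links, in bytes per second
object_size = 16_000      # bytes
fragment_size = 1_463     # bytes (one UDP-face MTU)

# Per-hop reassembly: every node waits for the whole object before forwarding.
reassembly_delay = hops * object_size / link_rate

# Cut-through: the object is serialized once; each additional hop adds only
# one fragment's serialization time.
cut_through_delay = (object_size + (hops - 1) * fragment_size) / link_rate

print(f"reassembly {reassembly_delay*1e3:.2f} ms vs cut-through {cut_through_delay*1e3:.2f} ms")
# roughly 7.7 ms vs 1.9 ms with these assumed numbers
```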
!  Nodes keep track of received and transmitted fragments in the PIT entry
!  It is therefore possible to discover loss of fragments between nodes and re-request delivery
!  Packet loss timers for hop-by-hop reliability are lower than for end-to-end reliability
   •  Loss timers are a multiple of the maximum path RTT
   •  Hop-by-hop reliability timers can be 30ms where end-to-end path reliability timers are ~500ms (sketch below)
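A sketch of how such a hop-by-hop loss timer might drive re-requests between adjacent nodes. Only the 30 ms figure comes from the slide; the bookkeeping shape and the retransmit request are assumptions.

```python
from typing import Callable, Dict, Tuple

HOP_LOSS_TIMER = 0.030   # 30 ms hop-by-hop (vs ~500 ms end-to-end, per the slide)

def check_for_loss(name: str,
                   fragments: Dict[int, Tuple[float, bool]],
                   now: float,
                   request_retransmit: Callable[[str, int], None]) -> None:
    """Re-request fragments the upstream neighbour should already have sent.

    'fragments' maps a fragment offset to (time we first expected it, arrived?),
    as recorded in the PIT entry described above.
    """
    for offset, (expected_at, arrived) in fragments.items():
        if not arrived and now - expected_at > HOP_LOSS_TIMER:
            request_retransmit(name, offset)   # hop-local repair request
            fragments[offset] = (now, False)   # restart the loss timer
```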
