[Ndn-interest] any comments on naming convention?

Ignacio.Solis at parc.com
Sat Sep 27 01:03:37 PDT 2014

On 9/26/14, 5:01 PM, "Lan Wang (lanwang)" <lanwang at memphis.edu> wrote:

>On Sep 25, 2014, at 4:09 PM, <Marc.Mosko at parc.com>
> wrote:
>> On Sep 25, 2014, at 10:01 PM, Lan Wang (lanwang) <lanwang at memphis.edu>
>>>>> - Benefit seems apparent in multi-consumer scenarios, even without
>>>>> Let's say I have 5 personal devices requesting mail. In Scheme B, the
>>>>> publisher receives and processes 5 interests per second on average. In
>>>>> Scheme A, with an upstream caching node, the publisher receives at most
>>>>> 1 per second. The publisher still has to throttle requests, but with no
>>>>> caching or scaling support from the network.
>>>> This can be done without selectors.  As long as all the clients
>>>>produce a
>>>> request for the same name, they can take advantage of caching.
>>> What Jeff responded to is that scheme B requires a freshness of 0 for
>>>the initial interest to get to the producer (in order to get the latest
>>>list of email names).  If freshness is 0, then there's no caching of
>>>the data.  No matter how the clients name their Interests, they can't
>>>take advantage of caching.
>> How do selectors prevent you from sending an Interest to the producer,
>>if it's connected?  I send a first interest "exclude <= 100" and cache A
>>responds with version 110.  Don't you then turn around and send a second
>>interest "exclude <= 110" to see if another cache has a more recent
>>version?  Won't that interest go to the producer, if it's connected?  It
>>will then need to send a NACK (or you need to time out) if there's
>>nothing more recent.
>> Using selectors, you still never know if there's a more recent version
>>until you get to the producer or you time out.  You always need to keep
>>asking and asking.  Also, there's nothing special about the content
>>object from the producer, so you still don't necessarily believe that
>>it's the most recent, and you'll then ask again.  Sure, an application
>>could just accept the 1st or 2nd content object it gets back, but it
>>never really knows.  Sure, if the CreateTime (I think you call it
>>Timestamp in NDN, if you still have it) is very recent and you assume
>>synchronized clocks, then you might have some belief that it's current.
>> We could also talk about FreshnessSeconds and MustBeFresh, but that
>>would be best to start its own thread on.
>First of all, I'm not saying selectors prevent you from sending an
>Interest to the producer.   Jeff's example is when you have five devices
>all wanting to get your emails, then the caching of the Data packet that
>contains the list of emails in Scheme A helps reduce the load on the
>producer.  No matter how many devices want to get the list and when they
>send their Interest, the load on the server is constant (at most one
>Interest for the email list per second in the example).  But in Scheme B,
>in the worst case, the producer can get 5 Interests per second.

First, the worst case is the worst case in _any_ scheme.  There is no
guarantee that any node will cache any data object.

Second, the assumption that every interest must go to the server depends
on the protocol/naming scheme you are using.

Example 1:

For example, say I name the data:


This means that I can programmatically generate a name that will get me
the "latest" data with a 1 minute window.  If multiple clients request at
the same time, they get the same object, no load on server.

If nodes request data at different times, then they would request
different objects.  In this case, the protocol/naming scheme is forcing
the 1 minute window of naming.

Assuming always-caching, the server would have at most one request per
minute.
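A minimal sketch of how such window-based names could be generated (the /mailbox/list prefix and the timestamp format are my own illustrative assumptions, since the concrete name from the original message is not shown):

```python
# Clients derive a shared name by truncating the current time to a
# 1-minute window, so requesters within the same minute ask for the
# same object and a cache can answer all but the first of them.
from datetime import datetime, timezone

def window_name(now=None, prefix="/mailbox/list"):
    """Return a name whose last component is the current time rounded
    down to the minute, e.g. /mailbox/list/2014-09-27T01:03."""
    if now is None:
        now = datetime.now(timezone.utc)
    return f"{prefix}/{now.strftime('%Y-%m-%dT%H:%M')}"

# Two clients requesting within the same minute compute the same name.
t = datetime(2014, 9, 27, 1, 3, 37, tzinfo=timezone.utc)
assert window_name(t) == window_name(t.replace(second=59))
```

Because the name is computed locally, no coordination between clients is needed; the naming convention itself enforces the 1-minute window.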

Example 2:

Server publishes an object called /mailbox/list/latest with a lifetime of
1 minute.   

Clients would issue requests for this object.  If clients are requesting
at the same time, they would get the same object. If clients are not
requesting at the same time, they may get different objects.

This method limits the load on the server, again, assuming the network is
caching.
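A minimal sketch of Example 2's behavior, assuming a hypothetical cache that honors a 60-second object lifetime (the name /mailbox/list/latest and this API are illustrative, not any actual NDN/CCN library):

```python
# A cache serves a stored object only while it is within its lifetime;
# once the copy expires, the next request must go upstream, so the
# server sees at most roughly one request per lifetime per cache.
import time

class Cache:
    def __init__(self, lifetime=60.0):
        self.lifetime = lifetime
        self.store = {}          # name -> (object, insertion time)

    def put(self, name, obj, now=None):
        self.store[name] = (obj, time.time() if now is None else now)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry and now - entry[1] < self.lifetime:
            return entry[0]      # fresh copy: no load on the server
        return None              # stale or missing: forward upstream

cache = Cache()
cache.put("/mailbox/list/latest", "list-v110", now=0.0)
assert cache.get("/mailbox/list/latest", now=30.0) == "list-v110"
assert cache.get("/mailbox/list/latest", now=90.0) is None
```

The difference from Example 1 is that here the window is anchored at publication time rather than at wall-clock minute boundaries, so two clients arriving on either side of an expiry may get different objects.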

Both of those methods limit the frequency at which the server produces
data.  Because of this limit, clients cannot force a response that is
dynamically generated for them at that point in time, although in both
examples there is some probability that a given request does reach the
server.

This is a big limitation on the client. It can't query dynamically.

But wait, there's more!  (that's a joke, for those not familiar with
American TV commercials)

If you want to allow clients to get to the server for fresh data, then we
are increasing the load on the server, independently of whether you use
selectors or not. 

Other approaches might include:

Example 3:
- A mixture of Example 1 and Example 2.

Example 4:
- A mixture of Example 1 (or/and 2) and a dynamic query mechanism.
Note that while selectors might be able to get you some of the dynamic
answers, they would have a similar probability to getting the cached
answers from Examples 1 and 2, and clients would still have to choose
whether to ask the server dynamically.  After all, if a client didn't
want to ask dynamically, it would be satisfied with Example 1 or 2.

>Second, with or without selectors, you need to keep asking since you
>never know when new emails will arrive and the list will change.  With
>any design, you need to keep asking.  The question is how often to ask.
>The user may be happy to get his new emails once every 10 minutes.  You
>can ask for a new list every 10 minutes.  If you get a list from
>somewhere (if it was cached, it must have been less than 1 second old if
>FreshnessSecond of 1 second was used; if not, it must have been generated
>by the server) or get a NACK from the server, you may stop asking.  If
>the user insists on getting new emails as soon as possible, then the email
>client can send an Interest, say, every minute.  This serves as a pending
>Interest at the server, and the server will respond whenever there's a
>new list.  This pending Interest needs to be refreshed whenever it times
>out (every minute in this example).  This is similar to sync.

I don't think you should model your protocols assuming PIT entries will
live that long.  It may be true that for a small network you might get 1
minute of PIT lifetime, but for a real network this won't be the case.  It
will be very expensive to get PIT lifetimes across the internet that are
more than a few seconds.  (Think of routers running a few 100-gig
interfaces, the size of PIT entries, and the amount of memory required.)
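As a rough illustration of that memory argument (all figures below are assumed for the sake of the arithmetic, not measurements):

```python
# Back-of-the-envelope cost of minute-long PIT entries on one 100 Gbps
# interface.  Interest size and per-entry state are assumed figures.
line_rate_bps = 100e9                 # one 100-gig interface
avg_interest_bits = 500 * 8           # assume ~500-byte interests
entry_bytes = 100                     # assume ~100 bytes of PIT state
lifetime_s = 60                       # the 1-minute refresh in the example

interests_per_s = line_rate_bps / avg_interest_bits   # 25 million/s
live_entries = interests_per_s * lifetime_s           # 1.5 billion entries
memory_gb = live_entries * entry_bytes / 1e9
print(f"{memory_gb:.0f} GB of PIT state per interface")
```

Even if the interface is nowhere near line rate, cutting the numbers by an order of magnitude still leaves gigabytes of per-interface PIT state, which is why lifetimes of more than a few seconds are expensive.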

