From shijunxiao at email.arizona.edu Mon Mar 3 08:45:08 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Mon, 3 Mar 2014 09:45:08 -0700 Subject: [Nfd-dev] NFD UML new location Message-ID: Hi NFD developers NFD UML has a new canonical URI: http://named-data.net/doc/nfd-uml/ Please reference this document instead of the copy hosted on IRL (which became inaccessible yesterday). Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Mon Mar 3 08:48:10 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Mon, 3 Mar 2014 16:48:10 +0000 Subject: [Nfd-dev] NFD UML new location In-Reply-To: Message-ID: Thank you. Is it possible that such documents could be versioned somehow? i.e., doc/nfd-uml// with a symlink to doc/nfd-uml/current/ Jeff From: Junxiao Shi > Date: Mon, 3 Mar 2014 09:45:08 -0700 To: > Subject: [Nfd-dev] NFD UML new location Hi NFD developers NFD UML has a new canonical URI: http://named-data.net/doc/nfd-uml/ Please reference this document instead of the copy hosted on IRL (which became inaccessible yesterday). Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Mon Mar 3 08:49:27 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Mon, 3 Mar 2014 16:49:27 +0000 Subject: [Nfd-dev] NFD UML new location In-Reply-To: Message-ID: Also, just a quick question in looking at the first page of this again... Why is the namespace "ndn" instead of "nfd"? Seems like the latter is more descriptive and just as short. Jeff From: Junxiao Shi > Date: Mon, 3 Mar 2014 09:45:08 -0700 To: > Subject: [Nfd-dev] NFD UML new location Hi NFD developers NFD UML has a new canonical URI: http://named-data.net/doc/nfd-uml/ Please reference this document instead of the copy hosted on IRL (which became inaccessible yesterday). Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Mon Mar 3 08:52:59 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Mon, 3 Mar 2014 16:52:59 +0000 Subject: [Nfd-dev] NFD UML new location In-Reply-To: References: Message-ID: I haven't updated the UML to reflect the actual code and changes we made during the development. It needs to be changed to nfd when I do the update. I can upload/publish the source for these diagrams (it is Visual Paradigm). --- Alex On Mar 3, 2014, at 4:49 PM, Burke, Jeff wrote: > > Also, just a quick question in looking at the first page of this again... > Why is the namespace "ndn" instead of "nfd"? Seems like the latter is more descriptive and just as short. > > Jeff > > > From: Junxiao Shi > Date: Mon, 3 Mar 2014 09:45:08 -0700 > To: > Subject: [Nfd-dev] NFD UML new location > > Hi NFD developers > > NFD UML has a new canonical URI: http://named-data.net/doc/nfd-uml/ > Please reference this document instead of the copy hosted on IRL (which became inaccessible yesterday).
> > Yours, Junxiao > _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Mon Mar 3 15:45:56 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Mon, 3 Mar 2014 16:45:56 -0700 Subject: [Nfd-dev] NFD new unit testing guidelines Message-ID: Hi NFD developers There are some changes in unit testing: - Tests should be in the nfd::tests namespace. This avoids potential name conflicts with the main program. - Test fixtures should derive from nfd::tests::BaseFixture. Test cases should use a fixture derived from BaseFixture. This ensures the global io_service is correctly created and destroyed for each test case. - The LimitedIo class is provided for IO count and time limits. Please read the details at http://redmine.named-data.net/projects/nfd/wiki/UnitTesting Code reviewers will enforce these requirements. Yours, Junxiao
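For illustration, a minimal test file that follows these guidelines might look like the sketch below (a sketch only: it assumes the Boost.Test conventions described on the wiki page and a test-common.hpp header that provides nfd::tests::BaseFixture; the fixture name, header path, and checked expression are placeholders):

#include "test-common.hpp"          // assumed header that defines BaseFixture and the global io_service handling
#include <boost/test/unit_test.hpp>

namespace nfd {
namespace tests {

// Deriving from BaseFixture ensures the global io_service is created
// before and destroyed after each test case, per the guidelines above.
class MyModuleFixture : public BaseFixture
{
};

BOOST_FIXTURE_TEST_SUITE(TestMyModule, MyModuleFixture)

BOOST_AUTO_TEST_CASE(Basics)
{
  BOOST_CHECK_EQUAL(1 + 1, 2); // placeholder assertion; real checks exercise the module under test
}

BOOST_AUTO_TEST_SUITE_END()

} // namespace tests
} // namespace nfd

(The Boost.Test main() is assumed to be supplied by the shared test driver rather than by this file.)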
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Wed Mar 5 16:58:08 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Thu, 6 Mar 2014 00:58:08 +0000 Subject: [Nfd-dev] FW: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh In-Reply-To: Message-ID: From: "Thompson, Jeff" > Date: Thu, 6 Mar 2014 00:47:07 +0000 To: NDN Lib > Subject: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh Hello. In a CCNx interest, if the AnswerOriginKind interest selector was absent, the forwarder would treat it as allow stale = False. But in the new TLV format, if the MustBeFresh selector is absent the spec says the default is to allow stale = True. http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh Is this change to default behavior intentional? Thanks, - Jeff T -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Wed Mar 5 17:32:41 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Wed, 05 Mar 2014 18:32:41 -0700 Subject: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh In-Reply-To: References: Message-ID: <00981d5b-95c7-484d-9bed-76817171a7bc@email.android.com> Hi JeffT Yes, this definition is intentional. It's okay for the library to specify MustBeFresh=true by default. Yours, Junxiao "Thompson, Jeff" wrote: >Hello. > >In a CCNx interest, if the AnswerOriginKind interest selector was >absent, the forwarder would treat it as allow stale = False. But in >the new TLV format, if the MustBeFresh selector is absent the spec says >the default is to allow stale = True. >http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh > >Is this change to default behavior intentional? > >Thanks, >- Jeff T > > >------------------------------------------------------------------------ > >_______________________________________________ >Ndn-lib mailing list >Ndn-lib at lists.cs.ucla.edu >http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-lib -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefft0 at remap.ucla.edu Wed Mar 5 18:05:23 2014 From: jefft0 at remap.ucla.edu (Thompson, Jeff) Date: Thu, 6 Mar 2014 02:05:23 +0000 Subject: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh In-Reply-To: <00981d5b-95c7-484d-9bed-76817171a7bc@email.android.com> References: <00981d5b-95c7-484d-9bed-76817171a7bc@email.android.com> Message-ID: Thanks for the reply. Perhaps this should be noted in the "Changes from CCNx" section. http://named-data.net/doc/ndn-tlv/interest.html#changes-from-ccnx Thanks, - Jeff T From: Junxiao Shi > Date: Wednesday, March 5, 2014 5:32 PM To: Jeff Thompson >, NDN Lib > Cc: nfd-dev > Subject: Re: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh Hi JeffT Yes, this definition is intentional. It's okay for the library to specify MustBeFresh=true by default. Yours, Junxiao "Thompson, Jeff" > wrote: Hello. In a CCNx interest, if the AnswerOriginKind interest selector was absent, the forwarder would treat it as allow stale = False. But in the new TLV format, if the MustBeFresh selector is absent the spec says the default is to allow stale = True. http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh Is this change to default behavior intentional? Thanks, - Jeff T ________________________________ Ndn-lib mailing list Ndn-lib at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-lib -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Fri Mar 7 02:14:20 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Fri, 7 Mar 2014 10:14:20 +0000 Subject: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh In-Reply-To: References: <00981d5b-95c7-484d-9bed-76817171a7bc@email.android.com> Message-ID: I have updated the spec, adding the suggested blurb (I put it before in ndnd-tlv redmine wiki, but forgot to propagate in other places). --- Alex On Mar 6, 2014, at 2:05 AM, Thompson, Jeff wrote: > Thanks for the reply. > > Perhaps this should be noted in the "Changes from CCNx" section. > http://named-data.net/doc/ndn-tlv/interest.html#changes-from-ccnx > > Thanks, > - Jeff T > > From: Junxiao Shi > Date: Wednesday, March 5, 2014 5:32 PM > To: Jeff Thompson , NDN Lib > Cc: nfd-dev > Subject: Re: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh > > Hi JeffT > Yes, this definition is intentional. > It's okay for the library to specify MustBeFresh=true by default. > Yours, Junxiao > > > "Thompson, Jeff" wrote: >> >> Hello. >> >> In a CCNx interest, if the AnswerOriginKind interest selector was absent, the forwarder would treat it as allow stale = False. But in the new TLV format, if the MustBeFresh selector is absent the spec says the default is to allow stale = True. >> http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh >> >> Is this change to default behavior intentional? >> >> Thanks, >> - Jeff T >> >> >> Ndn-lib mailing list >> Ndn-lib at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-lib > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethanhuang1991 at gmail.com Fri Mar 7 10:09:06 2014 From: ethanhuang1991 at gmail.com (Yi Huang) Date: Fri, 7 Mar 2014 11:09:06 -0700 Subject: [Nfd-dev] problem connecting Message-ID: Hello, I freshly installed NFD on two VMs which are connected via Ethernet. However, I had difficulty connecting from one VM to the other using nfd.
On VM1: ltr120 at traffic-gen-vm1:~/repo/NFD$ sudo nfd& [1] 17029 ltr120 at traffic-gen-vm1:~/repo/NFD$ ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:b8:a7:ef inet addr:192.168.254.15 Bcast:192.168.254.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:feb8:a7ef/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:102530 errors:0 dropped:0 overruns:0 frame:0 TX packets:32107 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:127258667 (127.2 MB) TX bytes:2226264 (2.2 MB) eth1 Link encap:Ethernet HWaddr 08:00:27:81:8f:44 inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe81:8f44/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1673 errors:0 dropped:0 overruns:0 frame:0 TX packets:1327 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:536141 (536.1 KB) TX bytes:283936 (283.9 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:37 errors:0 dropped:0 overruns:0 frame:0 TX packets:37 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2327 (2.3 KB) TX bytes:2327 (2.3 KB) On VM2: ltr120 at traffic-gen-vm2:~/repo2/NFD$ sudo nfd --tcp-connect "10.0.0.1:6363" --prefix /A ERROR: [Main] remote_endpoint: Transport endpoint is not connected ltr120 at traffic-gen-vm2:~/repo2/NFD$ ping 10.0.0.1 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 64 bytes from 10.0.0.1: icmp_req=1 ttl=64 time=1.94 ms 64 bytes from 10.0.0.1: icmp_req=2 ttl=64 time=10.6 ms 64 bytes from 10.0.0.1: icmp_req=3 ttl=64 time=8.99 ms 64 bytes from 10.0.0.1: icmp_req=4 ttl=64 time=17.4 ms 64 bytes from 10.0.0.1: icmp_req=5 ttl=64 time=15.4 ms 64 bytes from 10.0.0.1: icmp_req=6 ttl=64 time=274 ms ^C --- 10.0.0.1 ping statistics --- 6 packets transmitted, 6 received, 0% packet loss, time 5007ms rtt min/avg/max/mdev = 1.940/54.774/274.250/98.277 ms ltr120 at traffic-gen-vm2:~/repo2/NFD$ ifconfig eth0 Link encap:Ethernet HWaddr 08:00:27:08:1f:29 inet addr:192.168.254.15 Bcast:192.168.254.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe08:1f29/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:108373 errors:0 dropped:0 overruns:0 frame:0 TX packets:45645 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:128349277 (128.3 MB) TX bytes:3224278 (3.2 MB) eth1 Link encap:Ethernet HWaddr 08:00:27:a9:c1:00 inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fea9:c100/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1337 errors:0 dropped:0 overruns:0 frame:0 TX packets:1693 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:284732 (284.7 KB) TX bytes:537745 (537.7 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:65 errors:0 dropped:0 overruns:0 frame:0 TX packets:65 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3851 (3.8 KB) TX bytes:3851 (3.8 KB) Best, -- Yi Huang, Grad Student Cell-phone: (520)245-3921 Major in Computer Science Honors Program The University of Arizona E-mail: ethanhuang1991 @ gmail . com Tucson, AZ 85705 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shijunxiao at email.arizona.edu Fri Mar 7 10:14:40 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Fri, 7 Mar 2014 11:14:40 -0700 Subject: [Nfd-dev] problem connecting In-Reply-To: References: Message-ID: Please run `git log | head -1` on both NFD and ndn-cpp-dev repositories, and report the result. Don't run NFD in the background on vm1. Run it in the foreground and observe the logs. Run tcpdump on both VMs to see whether the TCP connection is established. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefft0 at remap.ucla.edu Fri Mar 7 10:26:07 2014 From: jefft0 at remap.ucla.edu (Thompson, Jeff) Date: Fri, 7 Mar 2014 18:26:07 +0000 Subject: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh In-Reply-To: References: <00981d5b-95c7-484d-9bed-76817171a7bc@email.android.com> Message-ID: If anyone wants to comment, here is the added text for the changes that applications need to make: Changed default semantics of staleness. Specifically, NDN-TLV Interest without any selectors will bring any data that matches the name, and only when MustBeFresh selector is enabled it will try to honor freshness, specified in Data packets. With Binary XML encoded Interests, the default behavior was to bring "fresh" data and return "stale" data only when AnswerOriginKind was set to 3. Application developers must be aware of this change, reexamine the Interest expression code, and enable MustBeFresh selector when necessary. From: Alex Afanasyev > Date: Friday, March 7, 2014 2:14 AM To: Jeff Thompson > Cc: Junxiao Shi >, NDN Lib >, nfd-dev > Subject: Re: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh I have updated the spec, adding the suggested blurb (I put it before in ndnd-tlv redmine wiki, but forgot to propagate in other places). --- Alex On Mar 6, 2014, at 2:05 AM, Thompson, Jeff > wrote: Thanks for the reply. Perhaps this should be noted in the "Changes from CCNx" section. http://named-data.net/doc/ndn-tlv/interest.html#changes-from-ccnx Thanks, - Jeff T From: Junxiao Shi > Date: Wednesday, March 5, 2014 5:32 PM To: Jeff Thompson >, NDN Lib > Cc: nfd-dev > Subject: Re: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh Hi JeffT Yes, this definition is intentional. It's okay for the library to specify MustBeFresh=true by default. Yours, Junxiao "Thompson, Jeff" > wrote: Hello. In a CCNx interest, if the AnswerOriginKind interest selector was absent, the forwarder would treat it as allow stale = False. But in the new TLV format, if the MustBeFresh selector is absent the spec says the default is to allow stale = True. http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh Is this change to default behavior intentional? Thanks, - Jeff T ________________________________ Ndn-lib mailing list Ndn-lib at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-lib _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ethanhuang1991 at gmail.com Fri Mar 7 10:34:59 2014 From: ethanhuang1991 at gmail.com (Yi Huang) Date: Fri, 7 Mar 2014 11:34:59 -0700 Subject: [Nfd-dev] problem connecting In-Reply-To: References: Message-ID: On VM1: ltr120 at traffic-gen-vm1:~/repo/NFD$ git log|head -1 commit 86fce8ea6e66bdf47ca6469470ed0aa212bbf6e4 ltr120 at traffic-gen-vm1:~/repo/NFD$ cd ../ndn-cpp-dev/ ltr120 at traffic-gen-vm1:~/repo/ndn-cpp-dev$ git log| head -1 commit 110881dd42c66f43e7bbf13c7c79e6f5fe0b2d7f ltr120 at traffic-gen-vm1:~/repo/ndn-cpp-dev$ sudo nfd On VM2: ltr120 at traffic-gen-vm2:~/repo2/NFD$ git log |head -1 commit 86fce8ea6e66bdf47ca6469470ed0aa212bbf6e4 ltr120 at traffic-gen-vm2:~/repo2/NFD$ cd ../ndn-cpp-dev/ ltr120 at traffic-gen-vm2:~/repo2/ndn-cpp-dev$ git log |head -1 commit 110881dd42c66f43e7bbf13c7c79e6f5fe0b2d7f ltr120 at traffic-gen-vm2:~/repo2/ndn-cpp-dev$ sudo nfd --tcp-connect " 10.0.0.1:6363" --prefix /A ERROR: [Main] remote_endpoint: Transport endpoint is not connected ltr120 at traffic-gen-vm2:~/repo2/ndn-cpp-dev$ tcpdump on VM1: ltr120 at traffic-gen-vm1:~$ sudo tcpdump -i eth1 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 11:31:58.868235 IP 10.0.0.2.40526 > 10.0.0.1.6363: Flags [S], seq 4011473563, win 14600, options [mss 1460,sackOK,TS val 100010451 ecr 0,nop,wscale 4], length 0 11:31:58.868272 IP 10.0.0.1.6363 > 10.0.0.2.40526: Flags [S.], seq 773555891, ack 4011473564, win 14480, options [mss 1460,sackOK,TS val 100014933 ecr 100010451,nop,wscale 4], length 0 11:31:58.868770 IP 10.0.0.2.40526 > 10.0.0.1.6363: Flags [R], seq 4011473564, win 0, length 0 11:32:03.871806 ARP, Request who-has 10.0.0.2 tell 10.0.0.1, length 28 11:32:03.879225 ARP, Request who-has 10.0.0.1 tell 10.0.0.2, length 28 11:32:03.879240 ARP, Reply 10.0.0.1 is-at 08:00:27:81:8f:44 (oui Unknown), length 28 11:32:04.322138 ARP, Reply 10.0.0.2 is-at 08:00:27:a9:c1:00 (oui Unknown), length 28 tcpdump on VM2: ltr120 at traffic-gen-vm2:~$ sudo tcpdump -i eth1 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes 11:31:58.884834 IP 10.0.0.2.40526 > 10.0.0.1.6363: Flags [S], seq 4011473563, win 14600, options [mss 1460,sackOK,TS val 100010451 ecr 0,nop,wscale 4], length 0 11:31:58.885615 IP 10.0.0.1.6363 > 10.0.0.2.40526: Flags [S.], seq 773555891, ack 4011473564, win 14480, options [mss 1460,sackOK,TS val 100014933 ecr 100010451,nop,wscale 4], length 0 11:31:58.885636 IP 10.0.0.2.40526 > 10.0.0.1.6363: Flags [R], seq 4011473564, win 0, length 0 11:32:03.896020 ARP, Request who-has 10.0.0.1 tell 10.0.0.2, length 28 11:32:04.338484 ARP, Request who-has 10.0.0.2 tell 10.0.0.1, length 28 11:32:04.338504 ARP, Reply 10.0.0.2 is-at 08:00:27:a9:c1:00 (oui Unknown), length 28 11:32:04.338526 ARP, Reply 10.0.0.1 is-at 08:00:27:81:8f:44 (oui Unknown), length 28 -- Yi Huang, Grad Student Cell-phone: (520)245-3921 Major in Computer Science Honors Program The University of Arizona E-mail: ethanhuang1991 @ gmail . com Tucson, AZ 85705 2014-03-07 11:14 GMT-07:00 Junxiao Shi : > Please run `git log | head -1` on both NFD and ndn-cpp-dev repositories, > and report the result. > > Don't run NFD in background on vm1. Run it in foreground and observe the > logs. > > Run tcpdump on both VMs to see whether TCP connection is established. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jefft0 at remap.ucla.edu Fri Mar 7 11:55:01 2014 From: jefft0 at remap.ucla.edu (Thompson, Jeff) Date: Fri, 7 Mar 2014 19:55:01 +0000 Subject: [Nfd-dev] Default for missing AnswerOriginKind vs. MustBeFresh Message-ID: If anyone wants to comment, here is the added text for the changes that applications need to make: Changed default semantics of staleness. Specifically, NDN-TLV Interest without any selectors will bring any data that matches the name, and only when MustBeFresh selector is enabled it will try to honor freshness, specified in Data packets. With Binary XML encoded Interests, the default behavior was to bring "fresh" data and return "stale" data only when AnswerOriginKind was set to 3. Application developers must be aware of this change, reexamine the Interest expression code, and enable MustBeFresh selector when necessary. From: Alex Afanasyev > Date: Friday, March 7, 2014 2:14 AM To: Jeff Thompson > Cc: Junxiao Shi >, NDN Lib >, nfd-dev > Subject: Re: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh I have updated the spec, adding the suggested blurb (I put it before in ndnd-tlv redmine wiki, but forgot to propagate in other places). --- Alex On Mar 6, 2014, at 2:05 AM, Thompson, Jeff > wrote: Thanks for the reply. Perhaps this should be noted in the "Changes from CCNx" section. http://named-data.net/doc/ndn-tlv/interest.html#changes-from-ccnx Thanks, - Jeff T From: Junxiao Shi > Date: Wednesday, March 5, 2014 5:32 PM To: Jeff Thompson >, NDN Lib > Cc: nfd-dev > Subject: Re: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh Hi JeffT Yes, this definition is intentional. It's okay for the library to specify MustBeFresh=true by default. Yours, Junxiao "Thompson, Jeff" > wrote: Hello. In a CCNx interest, if the AnswerOriginKind interest selector was absent, the forwarder would treat it as allow stale = False. But in the new TLV format, if the MustBeFresh selector is absent the spec says the default is to allow stale = True. http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh Is this change to default behavior intentional? Thanks, - Jeff T ________________________________ Ndn-lib mailing list Ndn-lib at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-lib _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Fri Mar 7 16:11:32 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Fri, 7 Mar 2014 17:11:32 -0700 Subject: [Nfd-dev] problem connecting In-Reply-To: <5A61E36F-AB50-43FA-B3D6-24C8A91FE2EF@cs.arizona.edu> References: <5A61E36F-AB50-43FA-B3D6-24C8A91FE2EF@cs.arizona.edu> Message-ID: Hi Yi Thanks for reporting. Please track this issue at NFD Bug 1344 . Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Fri Mar 7 18:17:29 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Sat, 8 Mar 2014 02:17:29 +0000 Subject: [Nfd-dev] Default for missing AnswerOriginKind vs. MustBeFresh In-Reply-To: Message-ID: suggested edit to be a little more specific: Changed default semantics of staleness in NDN-TLV. Specifically, an Interest without the MustBeFresh selector will bring any data that matches the Interest, regardless of freshness. (This is a change of the default behavior.) Only when the MustBeFresh selector is true will it honor the FreshnessPeriod specified in Data packets. With Binary XML encoded Interests, the default behavior had been to bring "fresh" data and return "stale" data only when AnswerOriginKind was set to 3. Application developers must be aware of this change, and if necessary change Interest expression code to enable the MustBeFresh selector.
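To make the required application-side change concrete, here is a minimal sketch (it assumes an ndn-cpp-dev / ndn-cxx style C++ API in which ndn::Interest exposes a setMustBeFresh() setter; the header paths, function name, and example name are illustrative, so adjust them to the library version actually in use):

#include <ndn-cpp-dev/interest.hpp> // illustrative header path
#include <ndn-cpp-dev/name.hpp>

// Build an Interest that only matches "fresh" Data under NDN-TLV semantics.
// Without the explicit setter, the forwarder may now satisfy the Interest
// with Data whose FreshnessPeriod has already expired.
ndn::Interest
makeFreshInterest(const ndn::Name& name)
{
  ndn::Interest interest(name);
  interest.setMustBeFresh(true); // explicit opt-in; this was the implicit default with Binary XML
  return interest;
}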
From: "Thompson, Jeff" > Date: Fri, 7 Mar 2014 19:55:01 +0000 To: "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Default for missing AnswerOriginKind vs. MustBeFresh If anyone wants to comment, here is the added text for the changes that applications need to make: Changed default semantics of staleness. Specifically, NDN-TLV Interest without any selectors will bring any data that matches the name, and only when MustBeFresh selector is enabled it will try to honor freshness, specified in Data packets. With Binary XML encoded Interests, the default behavior was to bring "fresh" data and return "stale" data only when AnswerOriginKind was set to 3. Application developers must be aware of this change, reexamine the Interest expression code, and enable MustBeFresh selector when necessary. From: Alex Afanasyev > Date: Friday, March 7, 2014 2:14 AM To: Jeff Thompson > Cc: Junxiao Shi >, NDN Lib >, nfd-dev > Subject: Re: [Nfd-dev] [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh I have updated the spec, adding the suggested blurb (I put it before in ndnd-tlv redmine wiki, but forgot to propagate in other places). --- Alex On Mar 6, 2014, at 2:05 AM, Thompson, Jeff > wrote: Thanks for the reply. Perhaps this should be noted in the "Changes from CCNx" section. http://named-data.net/doc/ndn-tlv/interest.html#changes-from-ccnx Thanks, - Jeff T From: Junxiao Shi > Date: Wednesday, March 5, 2014 5:32 PM To: Jeff Thompson >, NDN Lib > Cc: nfd-dev > Subject: Re: [Ndn-lib] Default for missing AnswerOriginKind vs. MustBeFresh Hi JeffT Yes, this definition is intentional. It's okay for the library to specify MustBeFresh=true by default. Yours, Junxiao "Thompson, Jeff" > wrote: Hello. In a CCNx interest, if the AnswerOriginKind interest selector was absent, the forwarder would treat it as allow stale = False. But in the new TLV format, if the MustBeFresh selector is absent the spec says the default is to allow stale = True. http://named-data.net/doc/ndn-tlv/interest.html#mustbefresh Is this change to default behavior intentional? Thanks, - Jeff T ________________________________ Ndn-lib mailing list Ndn-lib at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-lib _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethanhuang1991 at gmail.com Sat Mar 8 13:29:16 2014 From: ethanhuang1991 at gmail.com (Yi Huang) Date: Sat, 8 Mar 2014 14:29:16 -0700 Subject: [Nfd-dev] Integrating Test for NFD Message-ID: Hi, I've set up a simple integrating test environment for NFD. But there is one thing I am not sure. Should the integrating test always produce a pass/fail result? I am asking because the app actually produces a report of percentage data loss and percentage interest loss. Is the report sufficient or I should parse the report and gives a pass/fail result?
Best, -- Yi Huang, Grad Student Cell-phone: (520)245-3921 Major in Computer Science Honors Program The University of Arizona E-mail: ethanhuang1991 @ gmail . com Tucson, AZ 85705 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at cs.arizona.edu Sat Mar 8 13:54:25 2014 From: bzhang at cs.arizona.edu (Beichuan Zhang) Date: Sat, 8 Mar 2014 14:54:25 -0700 Subject: [Nfd-dev] Integrating Test for NFD In-Reply-To: References: Message-ID: <3DBDA475-EE2D-4844-A5D7-5A8F8F2A35F4@cs.arizona.edu> start with detailed statistics and analyze the results; once we got more experience, we can label simple pass/fail on each test. Beichuan On Mar 8, 2014, at 2:29 PM, Yi Huang wrote: > Hi, > > I've set up a simple integrating test environment for NFD. But there is one thing I am not sure. Should the integrating test always produce a pass/fail result? I am asking because the app actually produces a report of percentage data loss and percentage interest loss. Is the report sufficient or I should parse the report and gives a pass/fail result? > > Best, > -- > Yi Huang, Grad Student Cell-phone: (520)245-3921 > Major in Computer Science Honors Program > The University of Arizona E-mail: ethanhuang1991 @ gmail . com > Tucson, AZ 85705 > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dibenede at cs.colostate.edu Tue Mar 11 18:04:45 2014 From: dibenede at cs.colostate.edu (Steve DiBenedetto) Date: Tue, 11 Mar 2014 19:04:45 -0600 Subject: [Nfd-dev] config file process section Message-ID: Hi, The configuration file format ( http://redmine.named-data.net/projects/nfd/wiki/ConfigFileFormat) defines a "process" with a "pidfile" field. As of this moment, I am not aware of any module that handles this section. This is a problem because issue 1332 ( http://redmine.named-data.net/issues/1332) will enable config file processing and will (correctly) generate an exception under these circumstances. What module should use the "process" section and what issue should it belong to? Thanks, Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Tue Mar 11 19:00:12 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Tue, 11 Mar 2014 19:00:12 -0700 Subject: [Nfd-dev] config file process section In-Reply-To: References: Message-ID: Hi Steve I suggest adding a new free function (not a class method) as a handler of this section, which writes PID to a file. This could be part of task 1332. Yours, Junxiao On Mar 11, 2014 6:05 PM, "Steve DiBenedetto" wrote: > Hi, > > The configuration file format ( > http://redmine.named-data.net/projects/nfd/wiki/ConfigFileFormat) defines > a "process" with a "pidfile" field. As of this moment, I am not aware of > any module that handles this section. This is a problem because issue 1332 ( > http://redmine.named-data.net/issues/1332) will enable config file > processing and will (correctly) generate an exception under these > circumstances. > > What module should use the "process" section and what issue should it > belong to? > > Thanks, > Steve > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexander.afanasyev at ucla.edu Tue Mar 11 19:03:59 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Tue, 11 Mar 2014 19:03:59 -0700 Subject: [Nfd-dev] config file process section In-Reply-To: References: Message-ID: Do we actually need this PID file section at all? To my understanding, PID file is written by whatever system's start up mechanism, e.g., upstart in Ubuntu. --- Alex On Mar 11, 2014, at 7:00 PM, Junxiao Shi wrote: > Hi Steve > > I suggest adding a new free function (not a class method) as a handler of this section, which writes PID to a file. > > This could be part of task 1332. > > Yours, Junxiao > > On Mar 11, 2014 6:05 PM, "Steve DiBenedetto" wrote: > Hi, > > The configuration file format (http://redmine.named-data.net/projects/nfd/wiki/ConfigFileFormat) defines a "process" with a "pidfile" field. As of this moment, I am not aware of any module that handles this section. This is a problem because issue 1332 (http://redmine.named-data.net/issues/1332) will enable config file processing and will (correctly) generate an exception under these circumstances. > > What module should use the "process" section and what issue should it belong to? > > Thanks, > Steve > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Tue Mar 11 19:12:49 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Tue, 11 Mar 2014 19:12:49 -0700 Subject: [Nfd-dev] config file process section In-Reply-To: References: Message-ID: Yes, upstart can obtain PID http://upstart.ubuntu.com/wiki/Stanzas#daemon So we can omit "process" section. On Mar 11, 2014 7:04 PM, "Alex Afanasyev" wrote: > Do we actually need this PID file section at all? To my understanding, > PID file is written by whatever system's start up mechanism, e.g., upstart > in Ubuntu. > > --- > Alex > > On Mar 11, 2014, at 7:00 PM, Junxiao Shi > wrote: > > Hi Steve > > I suggest adding a new free function (not a class method) as a handler of > this section, which writes PID to a file. > > This could be part of task 1332. > > Yours, Junxiao > On Mar 11, 2014 6:05 PM, "Steve DiBenedetto" > wrote: > >> Hi, >> >> The configuration file format ( >> http://redmine.named-data.net/projects/nfd/wiki/ConfigFileFormat) >> defines a "process" with a "pidfile" field. As of this moment, I am not >> aware of any module that handles this section. This is a problem because >> issue 1332 (http://redmine.named-data.net/issues/1332) will enable >> config file processing and will (correctly) generate an exception under >> these circumstances. >> >> What module should use the "process" section and what issue should it >> belong to? >> >> Thanks, >> Steve >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From davidepesa at gmail.com Tue Mar 11 22:16:29 2014 From: davidepesa at gmail.com (Davide Pesavento) Date: Tue, 11 Mar 2014 22:16:29 -0700 Subject: [Nfd-dev] config file process section In-Reply-To: References: Message-ID: On Tue, Mar 11, 2014 at 6:04 PM, Steve DiBenedetto wrote: > Hi, > > The configuration file format > (http://redmine.named-data.net/projects/nfd/wiki/ConfigFileFormat) defines a > "process" with a "pidfile" field. As of this moment, I am not aware of any > module that handles this section. This is a problem because issue 1332 > (http://redmine.named-data.net/issues/1332) will enable config file > processing and will (correctly) generate an exception under these > circumstances. > For better forward and backward compatibility, I think we should just print a warning, not throw an exception, upon encountering unknown config sections or fields. -- Davide From shijunxiao at email.arizona.edu Tue Mar 11 22:39:34 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Tue, 11 Mar 2014 22:39:34 -0700 Subject: [Nfd-dev] config file process section In-Reply-To: References: Message-ID: Dear folks I disagree. Nobody reads warnings. An unknown section/field usually means there's a token is misspelled. Continue processing leads to undesired behavior. Yours, Junxiao On Tue, Mar 11, 2014 at 10:16 PM, Davide Pesavento wrote: > > > For better forward and backward compatibility, I think we should just > print a warning, not throw an exception, upon encountering unknown > config sections or fields. > > -- > Davide > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Tue Mar 11 22:45:28 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Tue, 11 Mar 2014 22:45:28 -0700 Subject: [Nfd-dev] config file process section In-Reply-To: References: Message-ID: I would agree here with Junxiao. We already made a decision before to treat unknown section as an error, and we should keep the consistency in implementation. For now, we can add a dummy processor for generic section. Actually, word 'general', 'generic', or something else would be better than 'process' --- Alex On Mar 11, 2014, at 10:39 PM, Junxiao Shi wrote: > Dear folks > > I disagree. Nobody reads warnings. > > An unknown section/field usually means there's a token is misspelled. > Continue processing leads to undesired behavior. > > Yours, Junxiao > > On Tue, Mar 11, 2014 at 10:16 PM, Davide Pesavento wrote: > > For better forward and backward compatibility, I think we should just > print a warning, not throw an exception, upon encountering unknown > config sections or fields. > > -- > Davide > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Wed Mar 12 11:30:59 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Wed, 12 Mar 2014 11:30:59 -0700 Subject: [Nfd-dev] config file installing Message-ID: <9607DD81-59E7-41F8-8D08-9C5C72164BD6@ucla.edu> I have a question regarding http://redmine.named-data.net/issues/1332. In particular, what is generally accepted way to work with installing configuration files during 'sudo make install' ('sudo ./waf install'). In redmine issue I put that we should check if config file exist and create it if it doesn't and don't do anything if file exists. 
But now I have some reservations whether this is correct approach or not. I know for sure that all package managers (macports, debian deb, freebsdf) have internal tools to work with configuration files, they will do all the check and (in debian and freebsd) present user a dialog to overwrite, keep, or merge config. I'm kind of hesitant to re-implement this function in wscript. There could be two alternative ways - don't install config and ensure that nfd when started without config file will use default values for everything - overwrite default config file during ./waf install What should we do? --- Alex From shijunxiao at email.arizona.edu Wed Mar 12 11:42:45 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Wed, 12 Mar 2014 11:42:45 -0700 Subject: [Nfd-dev] config file installing In-Reply-To: <9607DD81-59E7-41F8-8D08-9C5C72164BD6@ucla.edu> References: <9607DD81-59E7-41F8-8D08-9C5C72164BD6@ucla.edu> Message-ID: <85133e2a-42d8-49c4-86ad-bad2776d64f0@email.android.com> "./waf install" or "make install" is a command to *install* a software. It just needs to install a **sample** configuration file to /etc/ndn/nfd.conf.sample . `nfd` program requires a command line argument for the path to configuration file. User must copy the sample cconfiguration file, and pass its path as the argument. Package has a post-install step, which should: 1. copy the sample to /etc/ndn/nfd.conf , if it doesn?t exist 2. perform a dry-run, to guard against an incompatible configuration 3. setup upstart job that has the configuration file path Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Wed Mar 12 11:46:52 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Wed, 12 Mar 2014 11:46:52 -0700 Subject: [Nfd-dev] config file installing In-Reply-To: <85133e2a-42d8-49c4-86ad-bad2776d64f0@email.android.com> References: <9607DD81-59E7-41F8-8D08-9C5C72164BD6@ucla.edu> <85133e2a-42d8-49c4-86ad-bad2776d64f0@email.android.com> Message-ID: <259E2D7B-682E-49ED-90B6-65C0DC66D828@ucla.edu> Ok. Then it simplifies the job and everything is getting more straightforward. Another question. What about creating default.key file that is authorized for everything. Is it also should be deferred to the package manager? --- Alex On Mar 12, 2014, at 11:42 AM, Junxiao Shi wrote: > "./waf install" or "make install" is a command to *install* a software. It just needs to install a **sample** configuration file to /etc/ndn/nfd.conf.sample . > `nfd` program requires a command line argument for the path to configuration file. User must copy the sample cconfiguration file, and pass its path as the argument. > > Package has a post-install step, which should: > > 1. copy the sample to /etc/ndn/nfd.conf , if it doesn?t exist > 2. perform a dry-run, to guard against an incompatible configuration > 3. setup upstart job that has the configuration file path > > Yours, Junxiao > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Wed Mar 12 17:20:53 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Wed, 12 Mar 2014 17:20:53 -0700 Subject: [Nfd-dev] question about nrd Message-ID: I did several iterations of updates in nrd repo, but I still have a problem with approving 2 outstanding commits. Neither of them has unit tests, but my primary problem is lack of command interest verification, which exists now in NFD. 
It doesn't make sense for me to be extremely strict in NFD and yet allow everybody to register prefixes using NRD... Also. After looking more into the code I'm less and less convinced that NRD should be in a separate repository. A lot of things have been already implemented in NFD (logging, command interest verification, config file) and having separate repo basically means that the code needs to be copied (and maintained) in two places, instead of one. How should we proceed? --- Alex From shijunxiao at email.arizona.edu Wed Mar 12 17:30:30 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Wed, 12 Mar 2014 17:30:30 -0700 Subject: [Nfd-dev] question about nrd In-Reply-To: References: Message-ID: Logging: trivial code. Command Interest verification: NRD needs more complex logic such as per-namespace authorization. Configuration parser: different file format. If NFD and NRD should be in the same repository because these three modules are shared, same logic would cause NLSR to go into the same repository. I prefer to keep them separate, because they are different programs. Yours, Junxiao Alex Afanasyev wrote: >I did several iterations of updates in nrd repo, but I still have a >problem with approving 2 outstanding commits. Neither of them has >unit tests, but my primary problem is lack of command interest >verification, which exists now in NFD. It doesn't make sense for me to >be extremely strict in NFD and yet allow everybody to register prefixes >using NRD... > >Also. After looking more into the code I'm less and less convinced that >NRD should be in a separate repository. A lot of things have been >already implemented in NFD (logging, command interest verification, >config file) and having separate repo basically means that the code >needs to be copied (and maintained) in two places, instead of one. > >How should we proceed? > >--- >Alex > > >_______________________________________________ >Nfd-dev mailing list >Nfd-dev at lists.cs.ucla.edu >http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Wed Mar 12 17:57:37 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Wed, 12 Mar 2014 17:57:37 -0700 Subject: [Nfd-dev] question about nrd In-Reply-To: References: Message-ID: Ok. Let's keep separate... But what should I do with the outstanding commits? Approve and submit without tests and security? Wait for tests and/or security? There are other things like a broken logic---right now NRD will always return status 200 success, even if command to NFD will fail. Should this be fixed before the submission? --- Alex On Mar 12, 2014, at 5:30 PM, Junxiao Shi wrote: > Logging: trivial code. > Command Interest verification: NRD needs more complex logic such as per-namespace authorization. > Configuration parser: different file format. > > If NFD and NRD should be in the same repository because these three modules are shared, same logic would cause NLSR to go into the same repository. > > I prefer to keep them separate, because they are different programs. > > Yours, Junxiao > > > > Alex Afanasyev wrote: > I did several iterations of updates in nrd repo, but I still have a problem with approving 2 outstanding commits. Neither of them has unit tests, but my primary problem is lack of command interest verification, which exists now in NFD. It doesn't make sense for me to be extremely strict in NFD and yet allow everybody to register prefixes using NRD... > > Also. 
After looking more into the code I'm less and less convinced that NRD should be in a separate repository. A lot of things have been already implemented in NFD (logging, command interest verification, config file) and having separate repo basically means that the code needs to be copied (and maintained) in two places, instead of one. > > How should we proceed? > > --- > Alex > > > > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Wed Mar 12 18:04:51 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Wed, 12 Mar 2014 18:04:51 -0700 Subject: [Nfd-dev] question about nrd In-Reply-To: References: Message-ID: Unit testing is required in most Changes. For modules that relies on NFD, unit testing should not launch NFD or connect to NFD socket. Instead, it should provide a mock class for ndn::Face and/or ndn::nrd::Controller, and expect correct calls on the mock. Logic problem such as false positive responses should be fixed before submitting the Change. Command Interest authorization can be in a separate Change. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From obaidasyed at gmail.com Wed Mar 12 18:18:04 2014 From: obaidasyed at gmail.com (Syed Obaid Amin) Date: Wed, 12 Mar 2014 20:18:04 -0500 Subject: [Nfd-dev] question about nrd In-Reply-To: References: Message-ID: On Wed, Mar 12, 2014 at 7:57 PM, Alex Afanasyev < alexander.afanasyev at ucla.edu> wrote: > Ok. Let's keep separate... But what should I do with the outstanding > commits? Approve and submit without tests and security? Wait for tests > and/or security? > > There are other things like a broken logic---right now NRD will always > return status 200 success, even if command to NFD will fail. Should this > be fixed before the submission? > > We divided the nrd development in several iterations. For iteration 0, the one on the gerrit, nrd just translates the registration commands to the FibCommands. Yes much error checking is not there, especially in the part you mentioned, as this logic going to be changed in later iterations. However, if others decide I can add the error checking. Or you can merge the current version and error checking can be added as a separate feature. Again whatever the majority decides. Regards, Obaid > --- > Alex > > On Mar 12, 2014, at 5:30 PM, Junxiao Shi > wrote: > > Logging: trivial code. > Command Interest verification: NRD needs more complex logic such as > per-namespace authorization. > Configuration parser: different file format. > > If NFD and NRD should be in the same repository because these three > modules are shared, same logic would cause NLSR to go into the same > repository. > > I prefer to keep them separate, because they are different programs. > > Yours, Junxiao > > > Alex Afanasyev wrote: >> >> I did several iterations of updates in nrd repo, but I still have a problem with approving 2 outstanding commits. Neither of them has unit tests, but my primary problem is lack of command interest verification, which exists now in NFD. It doesn't make sense for me to be extremely strict in NFD and yet allow everybody to register prefixes using NRD... >> >> Also. After looking more into the code I'm less and less convinced that NRD should be in a separate repository. 
A lot of things have been already implemented in NFD (logging, command interest verification, config file) and having separate repo basically means that the code needs to be copied (and maintained) in two places, instead of one. >> >> How should we proceed? >> >> --- >> Alex >> >> >> ------------------------------ >> >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lanwang at memphis.edu Wed Mar 12 19:21:49 2014 From: lanwang at memphis.edu (Lan Wang (lanwang)) Date: Thu, 13 Mar 2014 02:21:49 +0000 Subject: [Nfd-dev] question about nrd In-Reply-To: References: Message-ID: <556BB11E-F60E-4AEE-8DE8-04844382C216@memphis.edu> On Mar 12, 2014, at 8:18 PM, Syed Obaid Amin > wrote: On Wed, Mar 12, 2014 at 7:57 PM, Alex Afanasyev > wrote: Ok. Let's keep separate... But what should I do with the outstanding commits? Approve and submit without tests and security? Wait for tests and/or security? There are other things like a broken logic---right now NRD will always return status 200 success, even if command to NFD will fail. Should this be fixed before the submission? We divided the nrd development in several iterations. For iteration 0, the one on the gerrit, nrd just translates the registration commands to the FibCommands. Yes much error checking is not there, especially in the part you mentioned, as this logic going to be changed in later iterations. However, if others decide I can add the error checking. Or you can merge the current version and error checking can be added as a separate feature. Again whatever the majority decides. Why will the error checking change? Since you are changing the data structure in the current iteration, I suppose checking the commands from users and the return values from nfd will be the same. Lan Regards, Obaid --- Alex On Mar 12, 2014, at 5:30 PM, Junxiao Shi > wrote: Logging: trivial code. Command Interest verification: NRD needs more complex logic such as per-namespace authorization. Configuration parser: different file format. If NFD and NRD should be in the same repository because these three modules are shared, same logic would cause NLSR to go into the same repository. I prefer to keep them separate, because they are different programs. Yours, Junxiao Alex Afanasyev > wrote: I did several iterations of updates in nrd repo, but I still have a problem with approving 2 outstanding commits. Neither of them has unit tests, but my primary problem is lack of command interest verification, which exists now in NFD. It doesn't make sense for me to be extremely strict in NFD and yet allow everybody to register prefixes using NRD... Also. After looking more into the code I'm less and less convinced that NRD should be in a separate repository. A lot of things have been already implemented in NFD (logging, command interest verification, config file) and having separate repo basically means that the code needs to be copied (and maintained) in two places, instead of one. How should we proceed? 
--- Alex ________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From obaidasyed at gmail.com Wed Mar 12 22:19:40 2014 From: obaidasyed at gmail.com (Syed Obaid Amin) Date: Thu, 13 Mar 2014 00:19:40 -0500 Subject: [Nfd-dev] question about nrd In-Reply-To: <556BB11E-F60E-4AEE-8DE8-04844382C216@memphis.edu> References: <556BB11E-F60E-4AEE-8DE8-04844382C216@memphis.edu> Message-ID: Currently we don't create the fib updates as such. A few fields from prefix registration option received from the application are first stored in a list and then passed to the fib as it is. If the face is invalid a callback is called that can tell nrd that the operation was not successful. However this status is not forwarded to the application. I can add checks here that don't call the callback registered for successful operation until the whole operation is not completed. And also delete the entry from the rib if the operation is not successful. It may work here well but not a good solution for the next iteration as it will incur unnecessary operations (adding a prefix entry into rib, updating corresponding entries, creating fib updates and finally deleting this and corresponding entries from the rib as the face was invalid). Ideally a prefix shouldn't be added to the RIB if the associated face is invalid. This cannot be done now as nrddoesn't have face list but it can get it later when the notification protocol is ready. On Wed, Mar 12, 2014 at 9:21 PM, Lan Wang (lanwang) wrote: > > On Mar 12, 2014, at 8:18 PM, Syed Obaid Amin > wrote: > > > > On Wed, Mar 12, 2014 at 7:57 PM, Alex Afanasyev < > alexander.afanasyev at ucla.edu> wrote: > >> Ok. Let's keep separate... But what should I do with the outstanding >> commits? Approve and submit without tests and security? Wait for tests >> and/or security? >> >> There are other things like a broken logic---right now NRD will always >> return status 200 success, even if command to NFD will fail. Should this >> be fixed before the submission? >> >> We divided the nrd development in several iterations. For iteration 0, > the one on the gerrit, nrd just translates the registration commands to the > FibCommands. Yes much error checking is not there, especially in the part > you mentioned, as this logic going to be changed in later iterations. > However, if others decide I can add the error checking. Or you can merge > the current version and error checking can be added as a separate feature. > Again whatever the majority decides. > > > Why will the error checking change? Since you are changing the data > structure in the current iteration, I suppose checking the commands from > users and the return values from nfd will be the same. > > Lan > > > Regards, > Obaid > >> --- >> Alex >> >> On Mar 12, 2014, at 5:30 PM, Junxiao Shi >> wrote: >> >> Logging: trivial code. >> Command Interest verification: NRD needs more complex logic such as >> per-namespace authorization. >> Configuration parser: different file format. 
>> >> If NFD and NRD should be in the same repository because these three >> modules are shared, the same logic would cause NLSR to go into the same >> repository. >> >> I prefer to keep them separate, because they are different programs. >> >> Yours, Junxiao >> >> >> Alex Afanasyev wrote: >>> >>> I did several iterations of updates in the nrd repo, but I still have a problem with approving 2 outstanding commits. Neither of them has unit tests, but my primary problem is the lack of command interest verification, which exists now in NFD. It doesn't make sense for me to be extremely strict in NFD and yet allow everybody to register prefixes using NRD... >>> >>> Also, after looking more into the code I'm less and less convinced that NRD should be in a separate repository. A lot of things have already been implemented in NFD (logging, command interest verification, config file) and having a separate repo basically means that the code needs to be copied (and maintained) in two places, instead of one. >>> >>> How should we proceed? >>> >>> --- >>> Alex >>> >>> >>> ------------------------------ >>> >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Wed Mar 12 23:45:44 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Wed, 12 Mar 2014 23:45:44 -0700 Subject: [Nfd-dev] config file installing In-Reply-To: <259E2D7B-682E-49ED-90B6-65C0DC66D828@ucla.edu> References: <9607DD81-59E7-41F8-8D08-9C5C72164BD6@ucla.edu> <85133e2a-42d8-49c4-86ad-bad2776d64f0@email.android.com> <259E2D7B-682E-49ED-90B6-65C0DC66D828@ucla.edu> Message-ID: Neither "./waf install" nor apt-get is able to create the default key, because the private key is owned by the operator user, while "./waf install" and apt-get are executed as root. The sample configuration file's comments should contain instructions on how to generate a key pair, and how to export the public key. Yours, Junxiao On Wed, Mar 12, 2014 at 11:46 AM, Alex Afanasyev < alexander.afanasyev at ucla.edu> wrote: > > What about creating a default.key file that is authorized for everything? > Should it also be deferred to the package manager? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Thu Mar 13 16:32:26 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Thu, 13 Mar 2014 16:32:26 -0700 Subject: [Nfd-dev] NFD combine management protocols Message-ID: Dear folks

NFD has too many protocols for management. This proposal is to combine most of these into a single "NFD Management protocol".

Changes
- "NFD Management protocol" only contains commands that can alter forwarder state.
- Every command is signed.
- Face Status protocol and FIB Enumeration protocol are not part of this protocol, and will be kept separate.
- The semantics of all commands are unchanged.
- The encoding of requests has a small change:
  - FaceManagementOptions, FibManagementOptions, and StrategyChoiceOptions are combined into one type
  - Existing code just needs to use the reassigned TLV-TYPE numbers
- LocalControlHeader enabling protocol has a syntax change: /localhost/nfd/control-header/enable|disable/
- One or more control modules can be indicated in the ControlCommandOptions block

New documentation
The documentation of "NFD Management protocol" would have the following sections:
1. Control Command
   - "Control Command" is used by commands that can alter forwarder state.
   - request format: /localhost/nfd///////
   - authentication: a link to [[Command Interests]]
   - options: definition of the ControlCommandOptions block
2. status dataset
   - This section describes how to segment a collection of status blocks (FaceStatus blocks in Face Status protocol, FibEntry blocks in FIB Enumeration protocol), and how these segments are named
3. Notification mechanism
4. Forwarder Status protocol
   - returns global counters
5. Face Management protocol
   - uses: Control Command, status dataset (for status), Notification mechanism (for status change notification)
6. LocalControlHeader Enabling protocol
   - uses: Control Command
7. FIB Management protocol
   - uses: Control Command, status dataset (for enumeration)
8. Strategy Choice protocol
   - uses: Control Command

Local Control Header will still have its own page which describes the format of the header. The Enabling protocol is linked to the section in "NFD Management protocol". Please give your feedback.

Yours, Junxiao
-------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Thu Mar 13 18:08:05 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Thu, 13 Mar 2014 18:08:05 -0700 Subject: [Nfd-dev] NFD combine management protocols In-Reply-To: References: Message-ID: <493360AA-BA99-4172-8A25-AD6C34DDEF0B@ucla.edu> On Mar 13, 2014, at 4:32 PM, Junxiao Shi wrote: > Dear folks > > NFD has too many protocols for management. > This proposal is to combine most of these into a single "NFD Management protocol". > > Changes > "NFD Management protocol" only contains commands that can alter forwarder state. > Every command is signed. > Face Status protocol and FIB Enumeration protocol are not part of this protocol, and will be kept separate. > The semantics of all commands are unchanged. > The encoding of requests has a small change: > FaceManagementOptions, FibManagementOptions, StrategyChoiceOptions are combined into one type > Existing code just needs to use reassigned TLV-TYPE numbers > LocalControlHeader enabling protocol has a syntax change: /localhost/nfd/control-header/enable|disable/ > One or more control modules can be indicated in ControlCommandOptions block > New documentation > The documentation of "NFD Management protocol" would have the following sections: > Control Command > "Control Command" is used by commands that can alter forwarder state.
> request format: /localhost/nfd/////// > authentication: a link to [[Command Interests]] > options: definition of ControlCommandOptions block > status dataset > This section describes how to segment a collection of status blocks (FaceStatus blocks in Face Status protocol, FibEntry blocks in FIB Enumeration protocol), and how these segments are named > Notification mechanism > Forwarder Status protocol > returns global counters > Face Management protocol > uses: Control Command, status dataset (for status), Notification mechanism (for status change notification) > LocalControlHeader Enabling protocol > uses: Control Command > FIB Management protocol > uses: Control Command, status dataset (for enumeration) > Strategy Choice protocol > uses: Control Command > > Local Control Header will still have its own page which describes the format of the header. Enabling protocol is linked to the section in "NFD Management protocol". Are you planning to have a huge page that describes all management protocols? I'm not quite sure that it would be right... I prefer keeping specs as separate pages (with proper link to "parent"), not just sections in a big document (it is already quite challenging to navigate within each spec). -- Alex > > Please give your feedback. > > Yours, Junxiao > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Thu Mar 13 20:14:44 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Thu, 13 Mar 2014 20:14:44 -0700 Subject: [Nfd-dev] NFD combine management protocols In-Reply-To: <493360AA-BA99-4172-8A25-AD6C34DDEF0B@ucla.edu> References: <493360AA-BA99-4172-8A25-AD6C34DDEF0B@ucla.edu> Message-ID: No, I don't plan to build a huge page. "NFD Management protocol" documentation page has a table of contents with links to section pages. Each section page has its parent set to "NFD Management protocol" (instead of "Wiki"), and has text and link "part of NFD Management protocol" in the first paragraph. NFD Wiki site homepage only has link to only "NFD Management protocol" main page. Yours, Junxiao On Thu, Mar 13, 2014 at 6:08 PM, Alex Afanasyev < alexander.afanasyev at ucla.edu> wrote: > > Are you planning to have a huge page that describes all management > protocols? I'm not quite sure that it would be right... I prefer keeping > specs as separate pages (with proper link to "parent"), not just sections > in a big document (it is already quite challenging to navigate within each > spec). > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Fri Mar 14 10:09:08 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Fri, 14 Mar 2014 10:09:08 -0700 Subject: [Nfd-dev] NFD combine management protocols In-Reply-To: References: <493360AA-BA99-4172-8A25-AD6C34DDEF0B@ucla.edu> Message-ID: <9C632DED-B512-4485-A028-EDBCC92F1041@ucla.edu> Oh. In this case, I don't have any objections. --- Alex On Mar 13, 2014, at 8:14 PM, Junxiao Shi wrote: > No, I don't plan to build a huge page. > "NFD Management protocol" documentation page has a table of contents with links to section pages. Each section page has its parent set to "NFD Management protocol" (instead of "Wiki"), and has text and link "part of NFD Management protocol" in the first paragraph. 
> NFD Wiki site homepage only has link to only "NFD Management protocol" main page. > > Yours, Junxiao > > On Thu, Mar 13, 2014 at 6:08 PM, Alex Afanasyev wrote: > Are you planning to have a huge page that describes all management protocols? I'm not quite sure that it would be right... I prefer keeping specs as separate pages (with proper link to "parent"), not just sections in a big document (it is already quite challenging to navigate within each spec). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Fri Mar 14 13:19:31 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Fri, 14 Mar 2014 13:19:31 -0700 Subject: [Nfd-dev] outage in ucla network Message-ID: there seem to be a UCLA-wide network outage, so gerrit and redmine is not really working right now... --- Alex From shijunxiao at email.arizona.edu Sat Mar 15 00:35:06 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Sat, 15 Mar 2014 00:35:06 -0700 Subject: [Nfd-dev] NFD combine management protocols In-Reply-To: References: Message-ID: Dear folks If anyone has objection to the proposal of combining management protocols, please send feedback before Mar 15 09:00 AM MST. I'll start making the change after this time. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Sat Mar 15 09:16:01 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Sat, 15 Mar 2014 09:16:01 -0700 Subject: [Nfd-dev] Fwd: TimingWheel Implementation - Need Code Optimization Review In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: "Jerald Paul Abraham" Date: Mar 15, 2014 4:47 AM Subject: TimingWheel Implementation - Need Code Optimization Review To: "Beichuan Zhang" , "Junxiao Shi" < shijunxiao at email.arizona.edu> Cc: Hello Dr.Zhang & Junxiao, I have completed the timing wheel implementation and have been working on its performance in the past week. But its performance does not appear to be as good as expected. Here is a sample comparison result when each is tested against 10000 timers. The value in *RED *is for the timing wheel implementation. The value in *BLUE *is for the existing multiset based implementation. These values are the *average number of nanoseconds by which every timer is off* its expected fire time. jeraldabraham at ndn1:~/src/timingwheel$ build/timingwheel Timer Data Of 20000 Size Average Off For 10000 Timers =* 1.65528e+08* Timer Data Of 20000 Size Average Off For 10000 Timers = *2.95907e+07* jeraldabraham at ndn1:~/src/timingwheel$ build/timingwheel Timer Data Of 20000 Size Average Off For 10000 Timers = *1.76106e+08* Timer Data Of 20000 Size Average Off For 10000 Timers =* 2.86517e+07* jeraldabraham at ndn1:~/src/timingwheel$ build/timingwheel Timer Data Of 20000 Size Average Off For 10000 Timers = *1.65347e+08* Timer Data Of 20000 Size Average Off For 10000 Timers = *3.01807e+07* *The Reason:* The timing wheel implementation although designed to be tickless, uses vector based data structures to hold timer information. The wheels advance only when a fire/request/cancel of a timer occurs. But these vector based data structure management operations take up time and are slower than the existing multiset implementation. *Why a code optimization review may help?* Even the slightest optimization in the timing wheel data structure manipulation operations can bring about visibly significant difference in this timer off metric. 
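The cost Jerald describes comes largely from the per-slot containers of the wheel. The fragment below is an illustration only (TimerEvent and WheelSlot are hypothetical names, unrelated to the attached AScheduler code): with a std::list per slot, adding, cancelling, and firing events never shifts the remaining elements, whereas erasing from the front or middle of a std::vector is linear in the number of elements behind the erased one.

    #include <chrono>
    #include <functional>
    #include <list>

    struct TimerEvent
    {
      std::chrono::steady_clock::time_point expiry;
      std::function<void()> callback;
    };

    // One slot of a timing wheel: the events scheduled for this slot's interval.
    class WheelSlot
    {
    public:
      typedef std::list<TimerEvent>::iterator EventId;

      EventId
      add(const TimerEvent& event)
      {
        // O(1); the returned iterator stays valid and permits O(1) cancellation.
        return m_events.insert(m_events.end(), event);
      }

      void
      cancel(EventId id)
      {
        m_events.erase(id); // O(1) for a list; O(n) if the slot were a vector
      }

      void
      fireAll()
      {
        // Invoked when the wheel hand reaches this slot.
        for (std::list<TimerEvent>::iterator it = m_events.begin();
             it != m_events.end(); ++it) {
          it->callback();
        }
        m_events.clear();
      }

    private:
      std::list<TimerEvent> m_events;
    };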
Therefore, if it is possible and we have time for Junxiao to point out any optimizations in the code, it may prove to be beneficial. I am attaching the code for the implementation herewith. Please let me know if we should go forward with this review. I can make the code available on Gerrit, but unless its benefit is verified, I think it's not even worth being a patchset for the ndn-cpp-dev Gerrit project. Please advise. -- Jerald -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: AScheduler.cpp Type: text/x-c++src Size: 29833 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: AScheduler.hpp Type: text/x-c++hdr Size: 4255 bytes Desc: not available URL: From shijunxiao at email.arizona.edu Sat Mar 15 09:18:34 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Sat, 15 Mar 2014 09:18:34 -0700 Subject: [Nfd-dev] TimingWheel Implementation - Need Code Optimization Review In-Reply-To: References: Message-ID: Hi Jerald When you compile code for a performance test, make sure debugging is disabled and optimization is at the highest level. I see one problem beyond compiler optimization: m_wheelEvents is a vector, but there are lots of front-insert and erase operations. A doubly linked list may perform better. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From obaidasyed at gmail.com Tue Mar 18 13:25:21 2014 From: obaidasyed at gmail.com (Syed Obaid Amin) Date: Tue, 18 Mar 2014 15:25:21 -0500 Subject: [Nfd-dev] Setting up multiple interests. Message-ID: Hello All, I was setting up multiple interest filters in an application. If any one of the interest filters cannot be set up, then my feeling is that the application should quit by calling face.shutdown. However, doing so may not result in a graceful shutdown and can cause segmentation faults, as the other interest filter, which was successfully set, might be in the middle of some processing. My question is how to handle such scenarios in a reasonable way? One solution might be to bind one interest filter to one face only, but then both faces would have to be part of the event loop. Otherwise, we could print warnings instead of calling shutdown, but I don't think users pay that much attention to warnings. I'll appreciate any suggestion in this regard. Thanks, Obaid -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Tue Mar 18 16:08:05 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Tue, 18 Mar 2014 16:08:05 -0700 Subject: [Nfd-dev] Setting up multiple interests. In-Reply-To: References: Message-ID: Hi Obaid You should be listening on all the prefixes you need at the same face. If there's a segfault at the producer or NFD, that is a bug. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Tue Mar 18 16:13:16 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Tue, 18 Mar 2014 16:13:16 -0700 Subject: [Nfd-dev] Setting up multiple interests. In-Reply-To: References: Message-ID: <1A7663BF-2552-4740-B5D5-7CDD7D0D8F9E@ucla.edu> Hi Obaid, There is no problem with having multiple setInterestFilters on the same face and calling shutdown. Unless you explicitly use boost::thread, everything is executed on a single stream and you would not have any problems.
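The pattern behind Obaid's question and the answers above can be shown with a toy event loop. DemoFace below is a hypothetical stand-in, not the ndn-cpp-dev Face class, and the callback signatures are illustrative: because the loop is single-threaded, a failure callback may safely call shutdown() without interrupting another handler that is mid-execution.

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    class DemoFace
    {
    public:
      typedef std::function<void(const std::string&)> FailureCallback;

      void
      setInterestFilter(const std::string& prefix, const FailureCallback& onFailure)
      {
        m_filters.push_back(std::make_pair(prefix, onFailure)); // outcome reported later
      }

      void
      processEvents()
      {
        // Single-threaded loop: callbacks run one at a time, so shutdown()
        // called from inside one of them cannot corrupt another handler.
        for (std::size_t i = 0; i < m_filters.size() && m_running; ++i) {
          if (m_filters[i].first == "/example/app/status")      // pretend the
            m_filters[i].second("prefix registration refused"); // second filter fails
        }
      }

      void
      shutdown()
      {
        m_running = false; // the loop stops after the current callback returns
      }

    private:
      std::vector<std::pair<std::string, FailureCallback> > m_filters;
      bool m_running = true;
    };

    int
    main()
    {
      DemoFace face;
      const char* prefixes[] = {"/example/app/cmd", "/example/app/status"};
      for (int i = 0; i < 2; ++i) {
        face.setInterestFilter(prefixes[i], [&face] (const std::string& reason) {
          std::cout << "setInterestFilter failed: " << reason << ", shutting down\n";
          face.shutdown();
        });
      }
      face.processEvents();
      return 0;
    }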
Even if you use boost::thread, you shouldn't have problems, as all operations are serialized, though I'm not 100% confident with shutdown call. In any case, if shutdowning the face when any of the setInterestFilter failed is the desired behavior, don't worry about it. It should work and you don't need to use multiple faces. --- Alex On Mar 18, 2014, at 1:25 PM, Syed Obaid Amin wrote: > Hello All, > > I was setting up multiple interest filters in an application. If an interest filter cannot be setup on any one of them then my feeling is the application should quit by calling face.shutdown. However, doing so may not result in graceful shutdown and can cause segmentation faults, as the other interest filter that was successfully set, might be in the middle of some processing. My question is how to handle such scenarios in a reasonable way? > > One solution might be is to bind one interest filter to one face only but then both faces should be a part of the event loop. Otherwise, we can also print warnings instead of calling shutdown, but I don't think users pay attentions to warnings that much. > > I'll appreciate any suggestion in this regard. > > Thanks, > Obaid > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev From alexander.afanasyev at ucla.edu Tue Mar 18 16:39:14 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Tue, 18 Mar 2014 16:39:14 -0700 Subject: [Nfd-dev] Small change to build policy Message-ID: <38CCBF05-B6E7-4D12-AF1E-9C906B078FE5@ucla.edu> Hi guys, To be more careful with the code, I'll change the build system to set -Werror flag with NFD is configured in debug mode and I will update jenkins to do the build in debug mode. If you're not yet doing so, I would recommend you to use --debug flag for ./waf configure. This disables all optimizations (= compiler time faster and more info in debugger). --- Alex From shijunxiao at email.arizona.edu Tue Mar 18 19:57:48 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Tue, 18 Mar 2014 19:57:48 -0700 Subject: [Nfd-dev] NDN-RTC poke Data to CS Message-ID: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidepesa at gmail.com Tue Mar 18 21:16:20 2014 From: davidepesa at gmail.com (Davide Pesavento) Date: Wed, 19 Mar 2014 00:16:20 -0400 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: On Mar 18, 2014 10:57 PM, "Junxiao Shi" wrote: > > I want to inform you that NFD will not admit any unsolicited Data from non-local face. Is this really the long term plan? If so, it would be a major problem for us too. V-NDN heavily relies on the ability to cache unsolicited Data packets that are overheard on the wireless channel, disallowing it will break this V-NDN functionality. So I'm asking to provide a way to enable unsolicited Data caching in NFD. A simple per-face boolean flag would do the job, and the performance impact of an additional "if" would be negligible. > they will be the first to get evicted when CS is full. 
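Davide's per-face flag amounts to one extra check in the path that decides whether an incoming Data packet may enter the CS. The type and field names below are hypothetical, not part of the current NFD Face class:

    // Hypothetical per-face attributes consulted by the incoming-Data pipeline.
    struct FaceTraits
    {
      bool isLocal;               // application face on this host (Unix socket, localhost TCP)
      bool admitUnsolicitedData;  // proposed opt-in, e.g. for a V-NDN wireless face
    };

    // First-release behavior admits unsolicited Data only from local faces;
    // the extra flag lets a node opt in per face.
    inline bool
    shouldAdmitUnsolicitedData(const FaceTraits& inFace)
    {
      return inFace.isLocal || inFace.admitUnsolicitedData;
    }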
This makes sense, although this decision could be delegated to an independent class (or function) that encapsulates the eviction policy, instead of being hardcoded in the CS code. Thanks, Davide From gpau at cs.ucla.edu Tue Mar 18 21:44:02 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Tue, 18 Mar 2014 21:44:02 -0700 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: <5C02B162-9B71-4ACB-94C3-58B07D607AB3@cs.ucla.edu> Hi all, I tend to disagree with Junxiao position on this matter We need the Unsolicited data and unsolicited data caching specifically on on the vehicular environment or NDN becomes basically useless in this environment, NFD shall provide a mechanism to enable unsolicited data on request. Thanks g. On Mar 18, 2014, at 9:16 PM, Davide Pesavento wrote: > On Mar 18, 2014 10:57 PM, "Junxiao Shi" wrote: >> >> I want to inform you that NFD will not admit any unsolicited Data from non-local face. > > Is this really the long term plan? If so, it would be a major problem > for us too. V-NDN heavily relies on the ability to cache unsolicited > Data packets that are overheard on the wireless channel, disallowing > it will break this V-NDN functionality. > > So I'm asking to provide a way to enable unsolicited Data caching in > NFD. A simple per-face boolean flag would do the job, and the > performance impact of an additional "if" would be negligible. > >> they will be the first to get evicted when CS is full. > > This makes sense, although this decision could be delegated to an > independent class (or function) that encapsulates the eviction policy, > instead of being hardcoded in the CS code. > > Thanks, > Davide From jburke at remap.ucla.edu Tue Mar 18 22:23:07 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Wed, 19 Mar 2014 05:23:07 +0000 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: Message-ID: Junxiao, Is this the final position of the NFD team? Last I remember there was some possibility this would be supported in the future. We plan to provide an alternative in the library, but until we have a high performance and easy to run repo with deletion support, we need this capability in the forwarder ? if nothing else, it's backwards compatibility with what was possible in CCN. If this is to be a repo function (and I can understand the motivation for that) - what is the status of repo performance testing? Do we know that the version to be released with the first version of NFD works with acceptable latency for video and will have delete implemented? Thanks, Jeff From: Junxiao Shi > Date: Tue, 18 Mar 2014 19:57:48 -0700 To: Peter Gusev > Cc: "ndn-app at lists.cs.ucla.edu" >, > Subject: [Nfd-dev] NDN-RTC poke Data to CS Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bzhang at cs.arizona.edu Tue Mar 18 22:38:31 2014 From: bzhang at cs.arizona.edu (Beichuan Zhang) Date: Tue, 18 Mar 2014 22:38:31 -0700 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: <1BCDC80B-DAD3-486C-B13A-F621B9342C53@cs.arizona.edu> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: > Hi Peter > > In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. > > I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. > > You should insert Data into a repository instead. > > Yours, Junxiao > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Tue Mar 18 22:44:52 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Wed, 19 Mar 2014 05:44:52 +0000 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: <1BCDC80B-DAD3-486C-B13A-F621B9342C53@cs.arizona.edu> Message-ID: Hi Beichuan, Thanks for the further explanation. We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) thanks, Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, Peter Gusev >, > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at cs.arizona.edu Tue Mar 18 23:30:11 2014 From: bzhang at cs.arizona.edu (Beichuan Zhang) Date: Tue, 18 Mar 2014 23:30:11 -0700 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: <006E7CB1-C42D-4655-B60E-B8B63188D0E6@cs.arizona.edu> How about this workaround? The browser app sends data to a proxy app on a remote node, the proxy has a local face to NFD and pushes data via this face. Since the data come from a local face, NFD will save this unsolicited data in CS. Beichuan On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: > Hi Beichuan, > > Thanks for the further explanation. 
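One way to use the local-face exception discussed in this thread is a small relay: it accepts wire-encoded Data from the application over whatever transport is convenient and hands each packet to NFD over a Unix-socket (local) face, so the forwarder treats it as local unsolicited Data. Data and LocalFace below are hypothetical stand-ins; the real library calls may differ.

    #include <cstdint>
    #include <string>
    #include <vector>

    struct Data
    {
      std::string name;
      std::vector<uint8_t> wire; // signed, wire-encoded Data packet
    };

    class LocalFace // connected to NFD via the Unix socket, hence "local"
    {
    public:
      void
      put(const Data& data)
      {
        // Forward the packet to NFD unchanged; because the face is local,
        // NFD admits it into the CS even though no Interest solicited it.
        send(data.wire);
      }

    private:
      void
      send(const std::vector<uint8_t>& wire)
      {
        // write the bytes to the Unix socket (omitted)
      }
    };

    // Invoked whenever the remote application (e.g. the browser side of
    // ndnrtc) delivers a packet over the relay's own transport.
    void
    onPacketFromApplication(LocalFace& face, const Data& data)
    {
      face.put(data);
    }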
> > We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) > > thanks, > Jeff > > > From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 > To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS > > In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. > > What Junxiao said is probably what the first release of NFD will have. > > Beichuan > > On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: > >> Hi Peter >> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >> You should insert Data into a repository instead. >> Yours, Junxiao >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Tue Mar 18 23:39:14 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Tue, 18 Mar 2014 23:39:14 -0700 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: <1E39B087-C59B-4948-98CC-21378D971503@ucla.edu> Hi Jeff, Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. --- Alex On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: > Hi Beichuan, > > Thanks for the further explanation. > > We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) > > thanks, > Jeff > > > From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 > To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS > > In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. > > What Junxiao said is probably what the first release of NFD will have. > > Beichuan > > On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: > >> Hi Peter >> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. 
>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >> You should insert Data into a repository instead. >> Yours, Junxiao >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Wed Mar 19 07:33:20 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Wed, 19 Mar 2014 14:33:20 +0000 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: <1E39B087-C59B-4948-98CC-21378D971503@ucla.edu> Message-ID: Hi Alex, Thanks for the reply. Comments below. Jeff From: Alex Afanasyev > Date: Tue, 18 Mar 2014 23:39:14 -0700 To: Jeff Burke > Cc: "bzhang at cs.arizona.edu" >, Junxiao Shi >, "ndn-app at lists.cs.ucla.edu" >, "Gusev, Peter" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS Hi Jeff, Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. [jb] Yes, this is all that is needed. I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. [jb] Our plan is that browser integration will have two components: 1) native code that provides rtc core functions, hopefully in an add-on/extension; 2) javascript that handles as much as possible, including conference discovery, etc. Neither would be required to use a websockets proxy as they will have access to socket functions via the add-on/extension, though they might to make the js more reusable in other circumstances. We set browser integration aside to get the media handling working so don't know exactly how it's going to go yet. But all unsolicited caching would be "local" - either via a proxy as you mention or with an actual local daemon. In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. [jb] Ok. If unsolicited caching for local nodes will work, that's probably all that is needed. Later, we can either 1) provide a cache in the library; 2) create an authenticated mechanism for storing things at the local daemon as Ilya has mentioned; or 3) use a fast local repo. --- Alex On Mar 18, 2014, at 10:44 PM, Burke, Jeff > wrote: Hi Beichuan, Thanks for the further explanation. We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) 
thanks, Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, Peter Gusev >, > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Wed Mar 19 07:42:37 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Wed, 19 Mar 2014 14:42:37 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: <083F3F3C-25C7-4A57-A55C-32372D2351D3@cisco.com> Message-ID: Dave, Sorry, this is a specific case that indeed requires some backstory: - We are only talking about local content injection ? i.e., from app to its local daemon. - This is a "feature" of NDNx/CCNx used in applications like our rtc implementation to provide a local buffer for publishing. Frames get pushed out as soon as they are captured, and the app itself doesn't have to worry about buffering them. The main downside is actually that you don't know how long the frames will really stay in the content store. - We do something similar (dumping frames as captured) in video playout with a repository. - I'm only concerned with preserving support for this in the initial NFD release so we can quickly test the application as is. There are three other solutions that seem better: 1) implement an application-side buffer in the library; we will do this soon; 2) provide an authenticated mechanism to control how the local daemon handles such pushed data; 3) have a fast local repo implementation that can be used for the same purpose. But there's no unsolicited content injection into the network. Jeff From: "Dave Oran (oran)" > Date: Wed, 19 Mar 2014 11:06:31 +0000 To: Alex Afanasyev > Cc: Jeff Burke >, "ndn-app at lists.cs.ucla.edu" >, "Gusev, Peter" >, "bzhang at cs.arizona.edu" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS Abject confusion. I thought one of the deep tenets of NDN was that it would be infeasible to inject unsolicited data into the network, thus eliminating all forms of flooding attacks other than of Interest messages. Snooping a broadcast wire is a special case where the data was in fact solicited (and the interest could have been snooped too). Sorry, but I clearly have not been privy to any of the backstory here, so it kind of hit me out of the blue. DaveO. 
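An application-side publish buffer of the kind mentioned in this thread, where frames are pushed into a local store as they are captured and then served from the application's own Interest callback, can be as small as a name-indexed map with a size cap. The types below are hypothetical; the sketch assumes exact-name lookup and that each segment name is published once.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    struct Data
    {
      std::string name;
      std::vector<uint8_t> wire;
    };

    class PublishBuffer
    {
    public:
      explicit
      PublishBuffer(std::size_t maxPackets)
        : m_maxPackets(maxPackets)
      {
      }

      void
      insert(const Data& data)
      {
        m_byName[data.name] = data;
        m_order.push_back(data.name);
        if (m_order.size() > m_maxPackets) {
          // Unlike the forwarder's CS, the application controls exactly
          // what is evicted and when (here: oldest segment first).
          m_byName.erase(m_order.front());
          m_order.erase(m_order.begin());
        }
      }

      const Data*
      find(const std::string& interestName) const
      {
        // Exact-name match only; a real buffer would also handle prefix
        // matching and selectors.
        std::map<std::string, Data>::const_iterator it = m_byName.find(interestName);
        return it == m_byName.end() ? 0 : &it->second;
      }

    private:
      std::size_t m_maxPackets;
      std::map<std::string, Data> m_byName;
      std::vector<std::string> m_order; // publication order, oldest first
    };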
On Mar 19, 2014, at 2:39 AM, "Alex Afanasyev" > wrote: Hi Jeff, Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. --- Alex On Mar 18, 2014, at 10:44 PM, Burke, Jeff > wrote: Hi Beichuan, Thanks for the further explanation. We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) thanks, Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, Peter Gusev >, > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Ndn-app mailing list Ndn-app at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Wed Mar 19 09:46:03 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Wed, 19 Mar 2014 16:46:03 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: Message-ID: Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) Probably already happening. :) One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store.? Don?t all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? Yes, this is all true. 
But as I understand it the content store gives no guarantee of what content will persist if it receives more objects than it can hold. For now, this is not an issue but when there are many apps moving content through the same store on a given node, we can't count on objects persisting. This is why a repo is probably ultimately the right place for this data to go, as you suggest. It has the added benefit of providing content archival, too. Jeff From: "Dave Oran (oran)" > Date: Wed, 19 Mar 2014 14:52:29 +0000 To: Jeff Burke > Cc: Alex Afanasyev >, "ndn-app at lists.cs.ucla.edu" >, "Gusev, Peter" >, "bzhang at cs.arizona.edu" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS OK, thanks for the clarification. Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) Among the implementation options you list below, it seems a fast Repo would be the best way forward, given that the lack of a fast repo would like cause more hacks to migrate either up to the application or down to the forwarder. One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store.? Don?t all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? On Mar 19, 2014, at 10:42 AM, Burke, Jeff > wrote: Dave, Sorry, this is a specific case that indeed requires some backstory: - We are only talking about local content injection ? i.e., from app to its local daemon. - This is a "feature" of NDNx/CCNx used in applications like our rtc implementation to provide a local buffer for publishing. Frames get pushed out as soon as they are captured, and the app itself doesn't have to worry about buffering them. The main downside is actually that you don't know how long the frames will really stay in the content store. - We do something similar (dumping frames as captured) in video playout with a repository. - I'm only concerned with preserving support for this in the initial NFD release so we can quickly test the application as is. There are three other solutions that seem better: 1) implement an application-side buffer in the library; we will do this soon; 2) provide an authenticated mechanism to control how the local daemon handles such pushed data; 3) have a fast local repo implementation that can be used for the same purpose. But there's no unsolicited content injection into the network. Jeff From: "Dave Oran (oran)" > Date: Wed, 19 Mar 2014 11:06:31 +0000 To: Alex Afanasyev > Cc: Jeff Burke >, "ndn-app at lists.cs.ucla.edu" >, "Gusev, Peter" >, "bzhang at cs.arizona.edu" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS Abject confusion. I thought one of the deep tenets of NDN was that it would be infeasible to inject unsolicited data into the network, thus eliminating all forms of flooding attacks other than of Interest messages. Snooping a broadcast wire is a special case where the data was in fact solicited (and the interest could have been snooped too). Sorry, but I clearly have not been privy to any of the backstory here, so it kind of hit me out of the blue. DaveO. 
On Mar 19, 2014, at 2:39 AM, "Alex Afanasyev" > wrote: Hi Jeff, Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. --- Alex On Mar 18, 2014, at 10:44 PM, Burke, Jeff > wrote: Hi Beichuan, Thanks for the further explanation. We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) thanks, Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, Peter Gusev >, > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Ndn-app mailing list Ndn-app at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app -------------- next part -------------- An HTML attachment was scrubbed... URL: From iliamo at ucla.edu Wed Mar 19 10:16:31 2014 From: iliamo at ucla.edu (Ilya Moiseenko) Date: Wed, 19 Mar 2014 10:16:31 -0700 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: On Mar 19, 2014, at 10:03 AM, Dave Oran (oran) wrote: > > On Mar 19, 2014, at 12:46 PM, Burke, Jeff wrote: > >> Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) >> >> Probably already happening. :) >> >> One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store.? Don?t all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? 
>> >> Yes, this is all true. But as I understand it the content store gives no guarantee of what content will persist if it receives more objects than it can hold. For now, this is not an issue but when there are many apps moving content through the same store on a given node, we can't count on objects persisting. This is why a repo is probably ultimately the right place for this data to go, as you suggest. It has the added benefit of providing content archival, too. >> > that?s what I suspected. I?d like to get involved in discussions around this. We are busy designing and building our wire-speed cache, and face the problem of intelligent CS eviction with a very small cycle budget at the scale we are running. Having enough information (e.g. desired holding time as well as maximum useful lifetime), in a form that can drive an eviction algorithm that avoids cache thrashing a pretty big part of our design analysis right now. Hello Dave, In our implementation we do not have a periodic cache cleanup as it is done in CCNx. CS entries are evicted during the insertion. I expect good performance because it is cheap constant time operation, but we don?t have firm numbers right now. Ilya > Thanks again, > DaveO. > >> Jeff >> >> >> From: "Dave Oran (oran)" >> Date: Wed, 19 Mar 2014 14:52:29 +0000 >> To: Jeff Burke >> Cc: Alex Afanasyev , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "bzhang at cs.arizona.edu" , "nfd-dev at lists.cs.ucla.edu" >> Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS >> >> OK, thanks for the clarification. Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) >> >> Among the implementation options you list below, it seems a fast Repo would be the best way forward, given that the lack of a fast repo would like cause more hacks to migrate either up to the application or down to the forwarder. >> >> One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store.? Don?t all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? >> >> On Mar 19, 2014, at 10:42 AM, Burke, Jeff wrote: >> >>> >>> Dave, >>> >>> Sorry, this is a specific case that indeed requires some backstory: >>> >>> - We are only talking about local content injection ? i.e., from app to its local daemon. >>> >>> - This is a "feature" of NDNx/CCNx used in applications like our rtc implementation to provide a local buffer for publishing. Frames get pushed out as soon as they are captured, and the app itself doesn't have to worry about buffering them. The main downside is actually that you don't know how long the frames will really stay in the content store. >>> >>> - We do something similar (dumping frames as captured) in video playout with a repository. >>> >>> - I'm only concerned with preserving support for this in the initial NFD release so we can quickly test the application as is. There are three other solutions that seem better: 1) implement an application-side buffer in the library; we will do this soon; 2) provide an authenticated mechanism to control how the local daemon handles such pushed data; 3) have a fast local repo implementation that can be used for the same purpose. >>> >>> But there's no unsolicited content injection into the network. 
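The two eviction points raised in this thread, putting the policy behind its own interface rather than hard-coding it in the CS, and evicting at insertion time instead of running a periodic cleanup, fit together naturally. The names below are hypothetical, not the actual NFD content store:

    #include <cstddef>
    #include <list>
    #include <string>

    struct CsEntry
    {
      std::string name;
      bool isUnsolicited;
    };

    // The policy owns the eviction decision; the table only asks for a victim.
    class EvictionPolicy
    {
    public:
      virtual ~EvictionPolicy() { }
      virtual std::list<CsEntry>::iterator
      evictOne(std::list<CsEntry>& entries) = 0;
    };

    // Matches the described first-release behavior: unsolicited entries go
    // first, otherwise the oldest entry goes. The linear scan is for clarity;
    // an O(1) version would keep its own queue of unsolicited entries.
    class UnsolicitedFirstPolicy : public EvictionPolicy
    {
    public:
      virtual std::list<CsEntry>::iterator
      evictOne(std::list<CsEntry>& entries)
      {
        for (std::list<CsEntry>::iterator it = entries.begin(); it != entries.end(); ++it) {
          if (it->isUnsolicited)
            return it;
        }
        return entries.begin();
      }
    };

    class ContentStore
    {
    public:
      ContentStore(std::size_t limit, EvictionPolicy& policy)
        : m_limit(limit), m_policy(policy)
      {
      }

      // Eviction happens here, at insertion time; there is no periodic sweep.
      void
      insert(const CsEntry& entry)
      {
        if (m_entries.size() >= m_limit)
          m_entries.erase(m_policy.evictOne(m_entries));
        m_entries.push_back(entry);
      }

    private:
      std::size_t m_limit;
      EvictionPolicy& m_policy;
      std::list<CsEntry> m_entries;
    };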
>>> >>> Jeff >>> >>> From: "Dave Oran (oran)" >>> Date: Wed, 19 Mar 2014 11:06:31 +0000 >>> To: Alex Afanasyev >>> Cc: Jeff Burke , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "bzhang at cs.arizona.edu" , "nfd-dev at lists.cs.ucla.edu" >>> Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS >>> >>> Abject confusion. >>> >>> I thought one of the deep tenets of NDN was that it would be infeasible to inject unsolicited data into the network, thus eliminating all forms of flooding attacks other than of Interest messages. >>> >>> Snooping a broadcast wire is a special case where the data was in fact solicited (and the interest could have been snooped too). >>> >>> Sorry, but I clearly have not been privy to any of the backstory here, so it kind of hit me out of the blue. >>> >>> DaveO. >>> >>> On Mar 19, 2014, at 2:39 AM, "Alex Afanasyev" wrote: >>> >>>> Hi Jeff, >>>> >>>> Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. >>>> >>>> I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. >>>> >>>> In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. >>>> >>>> --- >>>> Alex >>>> >>>> On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: >>>> >>>>> Hi Beichuan, >>>>> >>>>> Thanks for the further explanation. >>>>> >>>>> We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) >>>>> >>>>> thanks, >>>>> Jeff >>>>> >>>>> >>>>> From: "bzhang at cs.arizona.edu" >>>>> Date: Tue, 18 Mar 2014 22:38:31 -0700 >>>>> To: Junxiao Shi >>>>> Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , >>>>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>>>> >>>>> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. >>>>> >>>>> What Junxiao said is probably what the first release of NFD will have. >>>>> >>>>> Beichuan >>>>> >>>>> On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: >>>>> >>>>>> Hi Peter >>>>>> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >>>>>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >>>>>> You should insert Data into a repository instead. 
>>>>>> Yours, Junxiao >>>>>> _______________________________________________ >>>>>> Nfd-dev mailing list >>>>>> Nfd-dev at lists.cs.ucla.edu >>>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>> >>>>> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>> _______________________________________________ >>>>> Nfd-dev mailing list >>>>> Nfd-dev at lists.cs.ucla.edu >>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>> >>>> _______________________________________________ >>>> Ndn-app mailing list >>>> Ndn-app at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app >> > > _______________________________________________ > Ndn-app mailing list > Ndn-app at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app -------------- next part -------------- An HTML attachment was scrubbed... URL: From oran at cisco.com Wed Mar 19 04:06:31 2014 From: oran at cisco.com (Dave Oran (oran)) Date: Wed, 19 Mar 2014 11:06:31 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: <1E39B087-C59B-4948-98CC-21378D971503@ucla.edu> References: , <1E39B087-C59B-4948-98CC-21378D971503@ucla.edu> Message-ID: <083F3F3C-25C7-4A57-A55C-32372D2351D3@cisco.com> Abject confusion. I thought one of the deep tenets of NDN was that it would be infeasible to inject unsolicited data into the network, thus eliminating all forms of flooding attacks other than of Interest messages. Snooping a broadcast wire is a special case where the data was in fact solicited (and the interest could have been snooped too). Sorry, but I clearly have not been privy to any of the backstory here, so it kind of hit me out of the blue. DaveO. On Mar 19, 2014, at 2:39 AM, "Alex Afanasyev" > wrote: Hi Jeff, Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. --- Alex On Mar 18, 2014, at 10:44 PM, Burke, Jeff > wrote: Hi Beichuan, Thanks for the further explanation. We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) thanks, Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, Peter Gusev >, > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. 
I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Ndn-app mailing list Ndn-app at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app -------------- next part -------------- An HTML attachment was scrubbed... URL: From oran at cisco.com Wed Mar 19 07:52:29 2014 From: oran at cisco.com (Dave Oran (oran)) Date: Wed, 19 Mar 2014 14:52:29 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: OK, thanks for the clarification. Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) Among the implementation options you list below, it seems a fast Repo would be the best way forward, given that the lack of a fast repo would like cause more hacks to migrate either up to the application or down to the forwarder. One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store.? Don?t all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? On Mar 19, 2014, at 10:42 AM, Burke, Jeff wrote: > > Dave, > > Sorry, this is a specific case that indeed requires some backstory: > > - We are only talking about local content injection ? i.e., from app to its local daemon. > > - This is a "feature" of NDNx/CCNx used in applications like our rtc implementation to provide a local buffer for publishing. Frames get pushed out as soon as they are captured, and the app itself doesn't have to worry about buffering them. The main downside is actually that you don't know how long the frames will really stay in the content store. > > - We do something similar (dumping frames as captured) in video playout with a repository. > > - I'm only concerned with preserving support for this in the initial NFD release so we can quickly test the application as is. There are three other solutions that seem better: 1) implement an application-side buffer in the library; we will do this soon; 2) provide an authenticated mechanism to control how the local daemon handles such pushed data; 3) have a fast local repo implementation that can be used for the same purpose. > > But there's no unsolicited content injection into the network. > > Jeff > > From: "Dave Oran (oran)" > Date: Wed, 19 Mar 2014 11:06:31 +0000 > To: Alex Afanasyev > Cc: Jeff Burke , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "bzhang at cs.arizona.edu" , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS > > Abject confusion. 
> > I thought one of the deep tenets of NDN was that it would be infeasible to inject unsolicited data into the network, thus eliminating all forms of flooding attacks other than of Interest messages. > > Snooping a broadcast wire is a special case where the data was in fact solicited (and the interest could have been snooped too). > > Sorry, but I clearly have not been privy to any of the backstory here, so it kind of hit me out of the blue. > > DaveO. > > On Mar 19, 2014, at 2:39 AM, "Alex Afanasyev" wrote: > >> Hi Jeff, >> >> Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. >> >> I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. >> >> In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. >> >> --- >> Alex >> >> On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: >> >>> Hi Beichuan, >>> >>> Thanks for the further explanation. >>> >>> We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) >>> >>> thanks, >>> Jeff >>> >>> >>> From: "bzhang at cs.arizona.edu" >>> Date: Tue, 18 Mar 2014 22:38:31 -0700 >>> To: Junxiao Shi >>> Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , >>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>> >>> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. >>> >>> What Junxiao said is probably what the first release of NFD will have. >>> >>> Beichuan >>> >>> On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: >>> >>>> Hi Peter >>>> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >>>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >>>> You should insert Data into a repository instead. >>>> Yours, Junxiao >>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ >> Ndn-app mailing list >> Ndn-app at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From oran at cisco.com Wed Mar 19 10:03:34 2014 From: oran at cisco.com (Dave Oran (oran)) Date: Wed, 19 Mar 2014 17:03:34 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: On Mar 19, 2014, at 12:46 PM, Burke, Jeff wrote: > Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) > > Probably already happening. :) > > One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store." Don't all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? > > Yes, this is all true. But as I understand it the content store gives no guarantee of what content will persist if it receives more objects than it can hold. For now, this is not an issue but when there are many apps moving content through the same store on a given node, we can't count on objects persisting. This is why a repo is probably ultimately the right place for this data to go, as you suggest. It has the added benefit of providing content archival, too. > that's what I suspected. I'd like to get involved in discussions around this. We are busy designing and building our wire-speed cache, and face the problem of intelligent CS eviction with a very small cycle budget at the scale we are running. Having enough information (e.g. desired holding time as well as maximum useful lifetime), in a form that can drive an eviction algorithm that avoids cache thrashing, is a pretty big part of our design analysis right now. Thanks again, DaveO. > Jeff > > From: "Dave Oran (oran)" > Date: Wed, 19 Mar 2014 14:52:29 +0000 > To: Jeff Burke > Cc: Alex Afanasyev , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "bzhang at cs.arizona.edu" , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS > > OK, thanks for the clarification. Test question: how long do you think it will be before somebody jiggers the TCP connection to home on a remote box :-) > > Among the implementation options you list below, it seems a fast Repo would be the best way forward, given that the lack of a fast repo would likely cause more hacks to migrate either up to the application or down to the forwarder. > > One thing you said below did confuse me: "you don't know how long the frames will really stay in the content store." Don't all data objects have a timeout? Why would you not set the timeout to the playout deadline established by the application? Or are you concerned about eviction too soon as opposed to them hanging around too long? > > On Mar 19, 2014, at 10:42 AM, Burke, Jeff wrote: >> >> Dave, >> >> Sorry, this is a specific case that indeed requires some backstory: >> >> - We are only talking about local content injection - i.e., from app to its local daemon. >> >> - This is a "feature" of NDNx/CCNx used in applications like our rtc implementation to provide a local buffer for publishing. Frames get pushed out as soon as they are captured, and the app itself doesn't have to worry about buffering them. The main downside is actually that you don't know how long the frames will really stay in the content store. >> >> - We do something similar (dumping frames as captured) in video playout with a repository. 
>> >> - I'm only concerned with preserving support for this in the initial NFD release so we can quickly test the application as is. There are three other solutions that seem better: 1) implement an application-side buffer in the library; we will do this soon; 2) provide an authenticated mechanism to control how the local daemon handles such pushed data; 3) have a fast local repo implementation that can be used for the same purpose. >> >> But there's no unsolicited content injection into the network. >> >> Jeff >> >> From: "Dave Oran (oran)" >> Date: Wed, 19 Mar 2014 11:06:31 +0000 >> To: Alex Afanasyev >> Cc: Jeff Burke , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "bzhang at cs.arizona.edu" , "nfd-dev at lists.cs.ucla.edu" >> Subject: Re: [Ndn-app] [Nfd-dev] NDN-RTC poke Data to CS >> >> Abject confusion. >> >> I thought one of the deep tenets of NDN was that it would be infeasible to inject unsolicited data into the network, thus eliminating all forms of flooding attacks other than of Interest messages. >> >> Snooping a broadcast wire is a special case where the data was in fact solicited (and the interest could have been snooped too). >> >> Sorry, but I clearly have not been privy to any of the backstory here, so it kind of hit me out of the blue. >> >> DaveO. >> >> On Mar 19, 2014, at 2:39 AM, "Alex Afanasyev" wrote: >> >>> Hi Jeff, >>> >>> Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. >>> >>> I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. >>> >>> In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. >>> >>> --- >>> Alex >>> >>> On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: >>> >>>> Hi Beichuan, >>>> >>>> Thanks for the further explanation. >>>> >>>> We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) >>>> >>>> thanks, >>>> Jeff >>>> >>>> >>>> From: "bzhang at cs.arizona.edu" >>>> Date: Tue, 18 Mar 2014 22:38:31 -0700 >>>> To: Junxiao Shi >>>> Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , >>>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>>> >>>> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. >>>> >>>> What Junxiao said is probably what the first release of NFD will have. >>>> >>>> Beichuan >>>> >>>> On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: >>>> >>>>> Hi Peter >>>>> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >>>>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >>>>> You should insert Data into a repository instead. 
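To make the caching rule quoted just above concrete (unsolicited Data admitted only from local faces, and evicted ahead of solicited entries when the store fills), here is a minimal C++ sketch. It is an illustration only, not NFD's ContentStore code; the class, the field names, and the two-level eviction rule are assumptions made for this example.

    #include <ctime>
    #include <map>
    #include <string>

    struct CsEntry {
      std::string wire;       // encoded Data packet (placeholder)
      bool isUnsolicited;     // arrived without a matching PIT entry
      std::time_t arrival;    // insertion time, used as a tie-breaker
    };

    class TinyContentStore {
    public:
      explicit TinyContentStore(std::size_t limit) : m_limit(limit) {}

      // Returns false when unsolicited Data arrives on a non-local face,
      // mirroring the "will not admit" rule quoted above.
      bool insert(const std::string& name, const std::string& wire,
                  bool solicited, bool localFace)
      {
        if (!solicited && !localFace)
          return false;
        if (m_table.size() >= m_limit)
          evictOne();
        m_table[name] = CsEntry{wire, !solicited, std::time(nullptr)};
        return true;
      }

    private:
      // Unsolicited entries go first; among equals, the oldest goes.
      void evictOne()
      {
        auto victim = m_table.begin();
        for (auto it = m_table.begin(); it != m_table.end(); ++it) {
          bool moreEvictable =
              (it->second.isUnsolicited && !victim->second.isUnsolicited)
              || (it->second.isUnsolicited == victim->second.isUnsolicited
                  && it->second.arrival < victim->second.arrival);
          if (moreEvictable)
            victim = it;
        }
        if (victim != m_table.end())
          m_table.erase(victim);
      }

      std::size_t m_limit;
      std::map<std::string, CsEntry> m_table;
    };

Richer policies of the kind Dave asks about later in the thread (desired holding time, maximum useful lifetime) would only change the comparison inside evictOne().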
>>>>> Yours, Junxiao >>>>> _______________________________________________ >>>>> Nfd-dev mailing list >>>>> Nfd-dev at lists.cs.ucla.edu >>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>> >>>> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ >>> Ndn-app mailing list >>> Ndn-app at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From peter at remap.ucla.edu Wed Mar 19 10:08:31 2014 From: peter at remap.ucla.edu (Gusev, Peter) Date: Wed, 19 Mar 2014 17:08:31 +0000 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: <6A83162B-33AC-4995-8CC7-BAD5EDDEAEA9@remap.ucla.edu> Hi Junxiao, Thanks for your concerns. I should have mentioned this on the slides - I plan to have an "in-memory" content store in the app and handle incoming interests for the segments manually, so I do not need to rely on the presence of local daemon. However, with the scheme when an instance of a forwarder will be inside the browser, this scheme may change and allow app to poke data straight into it. Though it's unclear now. As the main reason why we decided to avoid using repository - is the access time to the data, which is apparently larger then the access to the "in-memory" CS. Thanks, -- Peter Gusev peter at remap.ucla.edu +1 213 5872748 (USA) +7 916 4434826 (Russia) +37 259 226448 (in case any other number is unavailable) peetonn_ (skype) On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Thu Mar 20 14:43:08 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Thu, 20 Mar 2014 14:43:08 -0700 Subject: [Nfd-dev] Interest lifetime limit Message-ID: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> I just got a question about the interest lifetime limit. Do we want to have any kind of (configurable) limit in NFD on maximum amount of time interest is allowed to be in PIT? --- Alex From christos at cs.colostate.edu Thu Mar 20 16:02:09 2014 From: christos at cs.colostate.edu (Christos Papadopoulos) Date: Thu, 20 Mar 2014 17:02:09 -0600 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> References: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> Message-ID: <532B7371.9090402@cs.colostate.edu> On 03/20/2014 03:43 PM, Alex Afanasyev wrote: > I just got a question about the interest lifetime limit. Do we want to have any kind of (configurable) limit in NFD on maximum amount of time interest is allowed to be in PIT? 
> From a robustness point of view the answer seems to be "yes". Is there a reason not to have a limit? A simple, configurable limit seems prudent, if nothing else as a first line of defense against buggy applications. The interesting question is what to do now when the limit is reached. Drop tail? Rate limit? Those about to time out? Something else? Christos. > --- > Alex > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > From shijunxiao at email.arizona.edu Thu Mar 20 17:02:07 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Thu, 20 Mar 2014 17:02:07 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <532B7371.9090402@cs.colostate.edu> References: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> <532B7371.9090402@cs.colostate.edu> Message-ID: A practical upper bound for InterestLifetime is necessary. I suggest a value around 32768ms. If an Interest has InterestLifetime larger than the upper bound, incoming Interest pipeline should set this field to the upper bound. Subsequent processing, including what is sent out, behaves as if InterestLifetime equals the upper bound. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Thu Mar 20 17:04:38 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Thu, 20 Mar 2014 17:04:38 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> <532B7371.9090402@cs.colostate.edu> Message-ID: <29A1A58E-832C-45DA-AB92-7419AE92D856@ucla.edu> Why this specific value? Can we just have something like a minute or so? --- Alex On Mar 20, 2014, at 5:02 PM, Junxiao Shi wrote: > A practical upper bound for InterestLifetime is necessary. I suggest a value around 32768ms. > > If an Interest has InterestLifetime larger than the upper bound, incoming Interest pipeline should set this field to the upper bound. Subsequent processing, including what is sent out, behaves as if InterestLifetime equals the upper bound. > > Yours, Junxiao > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lixia at cs.ucla.edu Thu Mar 20 19:48:06 2014 From: lixia at cs.ucla.edu (Lixia Zhang) Date: Thu, 20 Mar 2014 19:48:06 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <29A1A58E-832C-45DA-AB92-7419AE92D856@ucla.edu> References: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> <532B7371.9090402@cs.colostate.edu> <29A1A58E-832C-45DA-AB92-7419AE92D856@ucla.edu> Message-ID: <007C848E-3662-49EF-83F0-597A1384DE9E@cs.ucla.edu> On Mar 20, 2014, at 5:04 PM, Alex Afanasyev wrote: > Why this specific value? Can we just have something like a minute or so? as far as I can tell, it is 2^15, i.e. positive integer for a 2-byte value (32sec, not too far from 1 min :-) > > On Mar 20, 2014, at 5:02 PM, Junxiao Shi wrote: > >> A practical upper bound for InterestLifetime is necessary. I suggest a value around 32768ms. >> >> If an Interest has InterestLifetime larger than the upper bound, incoming Interest pipeline should set this field to the upper bound. Subsequent processing, including what is sent out, behaves as if InterestLifetime equals the upper bound. 
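A sketch of the clamping behaviour Junxiao proposes above, in plain C++. The constant and function names are invented for illustration, and 32768 ms is simply the default suggested in this thread; an operator-configurable setting could override it.

    #include <algorithm>
    #include <cstdint>

    // Default practical upper bound suggested in this thread (2^15 ms).
    static const uint64_t kMaxInterestLifetimeMs = 32768;

    // Hypothetical hook in the incoming Interest pipeline: clamp the requested
    // lifetime before the PIT entry is created or the Interest is forwarded,
    // so downstream processing behaves as if the bound had been requested.
    inline uint64_t clampInterestLifetime(uint64_t requestedMs,
                                          uint64_t boundMs = kMaxInterestLifetimeMs)
    {
      return std::min(requestedMs, boundMs);
    }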
>> >> Yours, Junxiao >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at cs.ucla.edu Thu Mar 20 21:26:10 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Thu, 20 Mar 2014 21:26:10 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <007C848E-3662-49EF-83F0-597A1384DE9E@cs.ucla.edu> References: <81ACEFA7-B775-4540-9324-4A98426C9661@ucla.edu> <532B7371.9090402@cs.colostate.edu> <29A1A58E-832C-45DA-AB92-7419AE92D856@ucla.edu> <007C848E-3662-49EF-83F0-597A1384DE9E@cs.ucla.edu> Message-ID: I would not bound it in this way is not better to express it as a 2 byte in seconds so we have the flexibility for future applications? /g. ========================== It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. - Leonardo da Vinci ========================== On Mar 20, 2014, at 7:48 PM, Lixia Zhang wrote: > > On Mar 20, 2014, at 5:04 PM, Alex Afanasyev wrote: > >> Why this specific value? Can we just have something like a minute or so? > > as far as I can tell, it is 2^15, i.e. positive integer for a 2-byte value > (32sec, not too far from 1 min :-) > > >> >> On Mar 20, 2014, at 5:02 PM, Junxiao Shi wrote: >> >>> A practical upper bound for InterestLifetime is necessary. I suggest a value around 32768ms. >>> >>> If an Interest has InterestLifetime larger than the upper bound, incoming Interest pipeline should set this field to the upper bound. Subsequent processing, including what is sent out, behaves as if InterestLifetime equals the upper bound. >>> >>> Yours, Junxiao >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev From jburke at remap.ucla.edu Thu Mar 20 21:37:25 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Fri, 21 Mar 2014 04:37:25 +0000 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: Message-ID: I don't know that we have enough evidence for how this might be used to restrict it to ~32 seconds maximum. This isn't even 10x the default value and doesn't approach what came up in the unresolved "long-lived interest" discussion. On the other hand, millisecond resolution (or decisecond resolution, at least) seems like it might also be useful in some cases. Is there a strong reason for this to be 2 bytes? Could we have a 4-byte long in ms instead, and have the maximum value correspond to a "hold it as long as you're willing" behavior? Is FreshnessPeriod also 2 bytes? This seems similarly limiting. This wouldn't handle our canonical NYTimes front page example. Jeff On 3/20/14, 9:26 PM, "Giovanni Pau" wrote: >I would not bound it in this way is not better to express it as a 2 byte >in seconds so we have the flexibility for future applications? > >/g. 
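On the 2-byte question raised above: in the TLV encoding referenced earlier in this thread, fields such as InterestLifetime carry a nonNegativeInteger, which is serialized in 1, 2, 4, or 8 octets depending on its value. The helper below is a sketch written from that size rule (shortest of the four widths, most significant byte first); it is not taken from any library.

    #include <cstdint>
    #include <vector>

    // Encode a nonNegativeInteger in the shortest of 1, 2, 4, or 8 octets,
    // most significant byte first, per the size rule in the NDN-TLV spec.
    std::vector<uint8_t> encodeNonNegativeInteger(uint64_t value)
    {
      std::size_t width = value <= 0xFF ? 1
                        : value <= 0xFFFF ? 2
                        : value <= 0xFFFFFFFF ? 4
                        : 8;
      std::vector<uint8_t> out(width);
      for (std::size_t i = 0; i < width; ++i)
        out[width - 1 - i] = static_cast<uint8_t>(value >> (8 * i));
      return out;
    }

32768 ms still fits in two octets (0x80 0x00), while one hour (3600000 ms) takes four, so the wire format itself does not force a ~32 second ceiling; any ceiling is a policy choice.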
>========================== >It had long since come to my attention that people of accomplishment >rarely sat back and let things happen to them. They went out and happened >to things. > >- Leonardo da Vinci >========================== > > > > >On Mar 20, 2014, at 7:48 PM, Lixia Zhang wrote: > >> >> On Mar 20, 2014, at 5:04 PM, Alex Afanasyev >> wrote: >> >>> Why this specific value? Can we just have something like a minute or >>>so? >> >> as far as I can tell, it is 2^15, i.e. positive integer for a 2-byte >>value >> (32sec, not too far from 1 min :-) >> >> >>> >>> On Mar 20, 2014, at 5:02 PM, Junxiao Shi >>> wrote: >>> >>>> A practical upper bound for InterestLifetime is necessary. I suggest >>>>a value around 32768ms. >>>> >>>> If an Interest has InterestLifetime larger than the upper bound, >>>>incoming Interest pipeline should set this field to the upper bound. >>>>Subsequent processing, including what is sent out, behaves as if >>>>InterestLifetime equals the upper bound. >>>> >>>> Yours, Junxiao >>>> >>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > >_______________________________________________ >Nfd-dev mailing list >Nfd-dev at lists.cs.ucla.edu >http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev From shijunxiao at email.arizona.edu Thu Mar 20 21:41:28 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Thu, 20 Mar 2014 21:41:28 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: Message-ID: Both InterestLifetime and FreshnessPeriod are nonNegativeInteger which could be up to 8 octets. They are measured in milliseconds. Yours, Junxiao On Mar 20, 2014 9:38 PM, "Burke, Jeff" wrote: > > > I don't know that we have enough evidence for how this might be used to > restrict it to ~32 seconds maximum. This isn't even 10x the default value > and doesn't approach what came up in the unresolved "long-lived interest" > discussion. On the other hand, millisecond resolution (or decisecond > resolution, at least) seems like it might also be useful in some cases. > > Is there a strong reason for this to be 2 bytes? Could we have a 4-byte > long in ms instead, and have the maximum value correspond to a "hold it as > long as you're willing" behavior? > > Is FreshnessPeriod also 2 bytes? This seems similarly limiting. This > wouldn't handle our canonical NYTimes front page example. > > Jeff > > > On 3/20/14, 9:26 PM, "Giovanni Pau" wrote: > > >I would not bound it in this way is not better to express it as a 2 byte > >in seconds so we have the flexibility for future applications? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at cs.ucla.edu Thu Mar 20 21:52:24 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Thu, 20 Mar 2014 21:52:24 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: Message-ID: Junxiao, sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. 
I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. Thanks g. ========================== It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. - Leonardo da Vinci ========================== On Mar 20, 2014, at 9:41 PM, Junxiao Shi wrote: > Both InterestLifetime and FreshnessPeriod are nonNegativeInteger which could be up to 8 octets. They are measured in milliseconds. > > Yours, Junxiao > On Mar 20, 2014 9:38 PM, "Burke, Jeff" wrote: > > > > > > I don't know that we have enough evidence for how this might be used to > > restrict it to ~32 seconds maximum. This isn't even 10x the default value > > and doesn't approach what came up in the unresolved "long-lived interest" > > discussion. On the other hand, millisecond resolution (or decisecond > > resolution, at least) seems like it might also be useful in some cases. > > > > Is there a strong reason for this to be 2 bytes? Could we have a 4-byte > > long in ms instead, and have the maximum value correspond to a "hold it as > > long as you're willing" behavior? > > > > Is FreshnessPeriod also 2 bytes? This seems similarly limiting. This > > wouldn't handle our canonical NYTimes front page example. > > > > Jeff > > > > > > On 3/20/14, 9:26 PM, "Giovanni Pau" wrote: > > > > >I would not bound it in this way is not better to express it as a 2 byte > > >in seconds so we have the flexibility for future applications? > > > > From shijunxiao at email.arizona.edu Thu Mar 20 22:17:37 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Thu, 20 Mar 2014 22:17:37 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: Message-ID: 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: * Protocol should not set an upper bound on InterestLifetime. * Setting a *practical* upper bound is a policy issue, configurable by operator. My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: * Most applications don't need any special lifetime. * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. * Forwarder cannot afford long lifetime because PIT entries consume memory. * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. Yours, Junxiao On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: > > Junxiao, > > sorry i can't get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. > > Thanks > g. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Fri Mar 21 07:29:21 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Fri, 21 Mar 2014 14:29:21 +0000 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: Message-ID: If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. 
In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? Jeff From: Junxiao Shi > Date: Thu, 20 Mar 2014 22:17:37 -0700 To: Giovanni Pau > Cc: Jeff Burke >, >, Lixia Zhang > Subject: Re: [Nfd-dev] Interest lifetime limit 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: * Protocol should not set an upper bound on InterestLifetime. * Setting a *practical* upper bound is a policy issue, configurable by operator. My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: * Most applications don't need any special lifetime. * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. * Forwarder cannot afford long lifetime because PIT entries consume memory. * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. Yours, Junxiao On Mar 20, 2014 9:52 PM, "Giovanni Pau" > wrote: > > Junxiao, > > sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. > > Thanks > g. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Fri Mar 21 09:57:56 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Fri, 21 Mar 2014 09:57:56 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: Message-ID: Hi all, In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). In any case. 
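Alex's point just above, that a still-interested downstream should re-express its interests, is easy to show as a consumer-side sketch. expressInterest() below is a stand-in stub, not an API from any library discussed here; a real consumer would also stop refreshing once the Data arrives.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    // Stand-in for a client-library call; assumed for illustration only.
    void expressInterest(const std::string& name, std::chrono::milliseconds lifetime)
    {
      std::cout << "Interest " << name << " lifetime=" << lifetime.count() << "ms\n";
    }

    // Soft state: refresh at ~80% of the lifetime so a fresh PIT entry exists
    // before the previous one expires, instead of asking routers to hold a
    // single interest for a very long time.
    void keepInterestAlive(const std::string& name,
                           std::chrono::milliseconds lifetime,
                           const std::atomic<bool>& stillInterested)
    {
      while (stillInterested) {
        expressInterest(name, lifetime);
        std::this_thread::sleep_for(lifetime * 8 / 10);
      }
    }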
The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). --- Alex On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: > > If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. > > In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. > > Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? > > Jeff > > From: Junxiao Shi > Date: Thu, 20 Mar 2014 22:17:37 -0700 > To: Giovanni Pau > Cc: Jeff Burke , , Lixia Zhang > Subject: Re: [Nfd-dev] Interest lifetime limit > > 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: > * Protocol should not set an upper bound on InterestLifetime. > * Setting a *practical* upper bound is a policy issue, configurable by operator. > My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: > * Most applications don't need any special lifetime. > * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. > * Forwarder cannot afford long lifetime because PIT entries consume memory. > * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. > Yours, Junxiao > On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: > > > > Junxiao, > > > > sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. > > > > Thanks > > g. > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Sat Mar 22 10:47:17 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Sat, 22 Mar 2014 17:47:17 +0000 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: Message-ID: Hi, I agree with Alex - I am not suggesting either infinite lifetimes or hard state. 
What I am suggesting is: 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx-caused delays in ndnrtc packet flows. Thanks, Jeff From: Alex Afanasyev > Date: Fri, 21 Mar 2014 09:57:56 -0700 To: Jeff Burke > Cc: Junxiao Shi >, Giovanni Pau >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Interest lifetime limit Hi all, In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). --- Alex On Mar 21, 2014, at 7:29 AM, Burke, Jeff > wrote: If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. 
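On the PIT cleanup mechanism mentioned above (and Christos's earlier "Drop tail? Rate limit? Those about to time out?" question), a toy sketch of one possible policy: a bounded table that, when full, discards the entry closest to expiry. The class and the policy are illustrative assumptions, not NFD's PIT.

    #include <chrono>
    #include <map>
    #include <string>

    using Clock = std::chrono::steady_clock;

    // Toy PIT keyed by Interest name; each entry remembers when it expires.
    class TinyPit {
    public:
      explicit TinyPit(std::size_t limit) : m_limit(limit) {}

      void insert(const std::string& name, Clock::duration lifetime)
      {
        if (m_entries.size() >= m_limit)
          dropOne();   // capacity policy, needed regardless of lifetime bounds
        m_entries[name] = Clock::now() + lifetime;
      }

    private:
      // One option from the thread: evict the entry that would expire soonest.
      void dropOne()
      {
        auto victim = m_entries.begin();
        for (auto it = m_entries.begin(); it != m_entries.end(); ++it)
          if (it->second < victim->second)
            victim = it;
        if (victim != m_entries.end())
          m_entries.erase(victim);
      }

      std::size_t m_limit;
      std::map<std::string, Clock::time_point> m_entries;
    };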
In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? Jeff From: Junxiao Shi > Date: Thu, 20 Mar 2014 22:17:37 -0700 To: Giovanni Pau > Cc: Jeff Burke >, >, Lixia Zhang > Subject: Re: [Nfd-dev] Interest lifetime limit 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: * Protocol should not set an upper bound on InterestLifetime. * Setting a *practical* upper bound is a policy issue, configurable by operator. My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: * Most applications don't need any special lifetime. * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. * Forwarder cannot afford long lifetime because PIT entries consume memory. * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. Yours, Junxiao On Mar 20, 2014 9:52 PM, "Giovanni Pau" > wrote: > > Junxiao, > > sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. > > Thanks > g. _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Sat Mar 22 10:58:26 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Sat, 22 Mar 2014 17:58:26 +0000 Subject: [Nfd-dev] Name component format Message-ID: Hi, There are a few changes to the representation of names in the TLV spec (http://named-data.net/doc/NDN-TLV/0.2/name.html) that I am not sure have been widely discussed. In particular, the introduction of types (beyond distinguishing the implicit digest), an updated URI representation, and the inability to specify empty name components. Are these considered "baked"? Would it be possible to discuss these at some point in more detail? Among other things, typing components unless required by the protocol (as seems to be the case with the implicit hash) seems to run counter to the notion of name opaqueness, and there are some conflicts in the URI representation that need to be resolved. Thanks, Jeff -------------- next part -------------- An HTML attachment was scrubbed... 
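For readers following the name-component question above: in the TLV spec cited in that message, the "type" of a component is just the TLV-TYPE octet that introduces it, so a generic component and a specially typed one (such as the implicit digest) differ only in that number. The sketch below uses the TLV-TYPE values listed in that spec version (7 = Name, 8 = NameComponent); treat them as placeholders if a later revision renumbers them, and note the writer only handles lengths below 253.

    #include <cstdint>
    #include <string>
    #include <vector>

    // TLV-TYPE numbers as listed in the spec referenced above; placeholders
    // if a later spec revision changes the assignments.
    enum : uint8_t { kName = 7, kGenericComponent = 8 };

    // Minimal TLV writer, valid only for type/length values below 253.
    static void appendTlv(std::vector<uint8_t>& out, uint8_t type,
                          const std::vector<uint8_t>& value)
    {
      out.push_back(type);
      out.push_back(static_cast<uint8_t>(value.size()));
      out.insert(out.end(), value.begin(), value.end());
    }

    // Encode a name such as /ndn/example; giving a component a different
    // "type" means emitting a different TLV-TYPE octet, nothing more.
    std::vector<uint8_t> encodeName(const std::vector<std::string>& components)
    {
      std::vector<uint8_t> inner;
      for (const std::string& c : components)
        appendTlv(inner, kGenericComponent,
                  std::vector<uint8_t>(c.begin(), c.end()));
      std::vector<uint8_t> out;
      appendTlv(out, kName, inner);
      return out;
    }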
URL: From jburke at remap.ucla.edu Sat Mar 22 12:08:56 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Sat, 22 Mar 2014 19:08:56 +0000 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: Message-ID: Just to close this thread: In the first release, NFD will not have unsolicited caching for local apps? So we need to transition the ndnrtc application and any others to have application-side buffers or use a repo? jeff From: Jeff Burke > Date: Wed, 19 Mar 2014 14:33:20 +0000 To: Alex Afanasyev > Cc: "ndn-app at lists.cs.ucla.edu" >, "Gusev, Peter" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS Hi Alex, Thanks for the reply. Comments below. Jeff From: Alex Afanasyev > Date: Tue, 18 Mar 2014 23:39:14 -0700 To: Jeff Burke > Cc: "bzhang at cs.arizona.edu" >, Junxiao Shi >, "ndn-app at lists.cs.ucla.edu" >, "Gusev, Peter" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS Hi Jeff, Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. [jb] Yes, this is all that is needed. I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. [jb] Our plan is that browser integration will have two components: 1) native code that provides rtc core functions, hopefully in an add-on/extension; 2) javascript that handles as much as possible, including conference discovery, etc. Neither would be required to use a websockets proxy as they will have access to socket functions via the add-on/extension, though they might to make the js more reusable in other circumstances. We set browser integration aside to get the media handling working so don't know exactly how it's going to go yet. But all unsolicited caching would be "local" - either via a proxy as you mention or with an actual local daemon. In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. [jb] Ok. If unsolicited caching for local nodes will work, that's probably all that is needed. Later, we can either 1) provide a cache in the library; 2) create an authenticated mechanism for storing things at the local daemon as Ilya has mentioned; or 3) use a fast local repo. --- Alex On Mar 18, 2014, at 10:44 PM, Burke, Jeff > wrote: Hi Beichuan, Thanks for the further explanation. We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) thanks, Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 18 Mar 2014 22:38:31 -0700 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, Peter Gusev >, > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. What Junxiao said is probably what the first release of NFD will have. 
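Option 1) above (a cache in the library) and Peter's earlier "in-memory content store in the app" amount to the same pattern: the publisher keeps its own segments and serves them from its Interest handler, rather than relying on unsolicited entries surviving in the forwarder's CS. A sketch, with invented names and the assumption that segment names sort in capture order:

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    class PublishBuffer {
    public:
      // Store a freshly produced segment (e.g. an encoded video frame).
      void put(const std::string& segmentName, std::vector<uint8_t> wire)
      {
        m_segments[segmentName] = std::move(wire);
      }

      // Called from the application's Interest handler; nullptr on a miss.
      const std::vector<uint8_t>* get(const std::string& segmentName) const
      {
        auto it = m_segments.find(segmentName);
        return it == m_segments.end() ? nullptr : &it->second;
      }

      // Bound memory by dropping segments older than the playout window,
      // assuming names sort in capture order.
      void dropBefore(const std::string& oldestNeededName)
      {
        m_segments.erase(m_segments.begin(),
                         m_segments.lower_bound(oldestNeededName));
      }

    private:
      std::map<std::string, std::vector<uint8_t>> m_segments;
    };

Unlike the content store, the application controls exactly how long each frame stays available, which addresses Dave's "how long will the frames really stay" concern.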
Beichuan On Mar 18, 2014, at 7:57 PM, Junxiao Shi > wrote: Hi Peter In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. You should insert Data into a repository instead. Yours, Junxiao _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Sat Mar 22 12:11:20 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sat, 22 Mar 2014 12:11:20 -0700 Subject: [Nfd-dev] NDN-RTC poke Data to CS In-Reply-To: References: Message-ID: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> As Junxiao mentioned several time, first release WILL have unsolicited caching for LOCAL apps. --- Alex On Mar 22, 2014, at 12:08 PM, Burke, Jeff wrote: > > Just to close this thread: In the first release, NFD will not have unsolicited caching for local apps? > > So we need to transition the ndnrtc application and any others to have application-side buffers or use a repo? > > jeff > > From: Jeff Burke > Date: Wed, 19 Mar 2014 14:33:20 +0000 > To: Alex Afanasyev > Cc: "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS > > Hi Alex, > Thanks for the reply. Comments below. > Jeff > > From: Alex Afanasyev > Date: Tue, 18 Mar 2014 23:39:14 -0700 > To: Jeff Burke > Cc: "bzhang at cs.arizona.edu" , Junxiao Shi , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS > > Hi Jeff, > > Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. > > [jb] Yes, this is all that is needed. > > I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. > > [jb] Our plan is that browser integration will have two components: 1) native code that provides rtc core functions, hopefully in an add-on/extension; 2) javascript that handles as much as possible, including conference discovery, etc. Neither would be required to use a websockets proxy as they will have access to socket functions via the add-on/extension, though they might to make the js more reusable in other circumstances. We set browser integration aside to get the media handling working so don't know exactly how it's going to go yet. 
But all unsolicited caching would be "local" - either via a proxy as you mention or with an actual local daemon. > > In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. > > [jb] Ok. If unsolicited caching for local nodes will work, that's probably all that is needed. Later, we can either 1) provide a cache in the library; 2) create an authenticated mechanism for storing things at the local daemon as Ilya has mentioned; or 3) use a fast local repo. > > --- > Alex > > On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: > >> Hi Beichuan, >> >> Thanks for the further explanation. >> >> We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) >> >> thanks, >> Jeff >> >> >> From: "bzhang at cs.arizona.edu" >> Date: Tue, 18 Mar 2014 22:38:31 -0700 >> To: Junxiao Shi >> Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , >> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >> >> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. >> >> What Junxiao said is probably what the first release of NFD will have. >> >> Beichuan >> >> On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: >> >>> Hi Peter >>> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >>> You should insert Data into a repository instead. >>> Yours, Junxiao >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lixia at cs.ucla.edu Sat Mar 22 13:13:49 2014 From: lixia at cs.ucla.edu (Lixia Zhang) Date: Sat, 22 Mar 2014 13:13:49 -0700 Subject: [Nfd-dev] Name component format In-Reply-To: References: Message-ID: <2B420430-013C-409D-BDA7-A240A7550D60@cs.ucla.edu> On Mar 22, 2014, at 10:58 AM, "Burke, Jeff" wrote: > Hi, > > There are a few changes to the representation of names in the TLV spec (http://named-data.net/doc/NDN-TLV/0.2/name.html) that I am not sure have been widely discussed. In particular, the introduction of types (beyond distinguishing the implicit digest), an updated URI representation, and the inability to specify empty name components. > > Are these considered "baked"? Would it be possible to discuss these at some point in more detail? Hi Jeff, the changes were made after some discussions among the NFD team, then with Van. Not sure what you meant by "considered baked". . . - I do not think the changes made to the NFD release-1. 
- we are doing explorative research, right? Of course all naming issues can benefit from more discussions. - wonder if you would like to propose a specific time frame (i.e. next week, or longer term)? - it would be helpful if there are some inputs/reading/considerations over email before the call, so that people can think through first. > Among other things, typing components unless required by the protocol (as seems to be the case with the implicit hash) seems to run counter to the notion of name opaqueness, and there are some conflicts in the URI representation that need to be resolved. For any URI issues: Please let Alex and Junxiao know. for component typing: are you saying that we should allow name component typing? In any case, as soon as we can collect a list of technical questions, I can try scheduling a discussion. Lixia -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at cs.ucla.edu Sat Mar 22 18:24:00 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Sat, 22 Mar 2014 18:24:00 -0700 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> References: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> Message-ID: <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> Hi, just to reinforce, this will be needed at least as a configuration parameter in future releases too, as it is key to be able to exploit unsolicited caching of content. As per our studies we rely on (unsolicited) cached content between 25 and 50% of the times due to the dynamic partitioning nature of the vehicular network. If in the future this option will be no longer supported we may need to branch out the vehicular version; i find branching out a much less elegant option compared to a simple configuration parameter. Best g. ========================== It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. - Leonardo da Vinci ========================== On Mar 22, 2014, at 12:11 PM, Alex Afanasyev wrote: > As Junxiao mentioned several time, first release WILL have unsolicited caching for LOCAL apps. > > --- > Alex > > On Mar 22, 2014, at 12:08 PM, Burke, Jeff wrote: > >> >> Just to close this thread: In the first release, NFD will not have unsolicited caching for local apps? >> >> So we need to transition the ndnrtc application and any others to have application-side buffers or use a repo? >> >> jeff >> >> From: Jeff Burke >> Date: Wed, 19 Mar 2014 14:33:20 +0000 >> To: Alex Afanasyev >> Cc: "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "nfd-dev at lists.cs.ucla.edu" >> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >> >> Hi Alex, >> Thanks for the reply. Comments below. >> Jeff >> >> From: Alex Afanasyev >> Date: Tue, 18 Mar 2014 23:39:14 -0700 >> To: Jeff Burke >> Cc: "bzhang at cs.arizona.edu" , Junxiao Shi , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "nfd-dev at lists.cs.ucla.edu" >> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >> >> Hi Jeff, >> >> Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. >> >> [jb] Yes, this is all that is needed. >> >> I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). 
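Giovanni's "simple configuration parameter" above, and Alex's earlier "policies that can be adjusted per node", could look something like the sketch below. The option name and the config representation are invented here, not an actual NFD setting.

    #include <map>
    #include <string>

    struct CachingPolicy {
      bool admitUnsolicitedLocal = true;      // first-release behaviour per this thread
      bool admitUnsolicitedNonLocal = false;  // what a vehicular deployment would enable
    };

    // Read a node-local policy from an already-parsed key/value config.
    // The key "tables.cs_admit_unsolicited_nonlocal" is hypothetical.
    CachingPolicy loadCachingPolicy(const std::map<std::string, std::string>& conf)
    {
      CachingPolicy policy;
      auto it = conf.find("tables.cs_admit_unsolicited_nonlocal");
      if (it != conf.end())
        policy.admitUnsolicitedNonLocal = (it->second == "yes");
      return policy;
    }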
If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. >> >> [jb] Our plan is that browser integration will have two components: 1) native code that provides rtc core functions, hopefully in an add-on/extension; 2) javascript that handles as much as possible, including conference discovery, etc. Neither would be required to use a websockets proxy as they will have access to socket functions via the add-on/extension, though they might to make the js more reusable in other circumstances. We set browser integration aside to get the media handling working so don't know exactly how it's going to go yet. But all unsolicited caching would be "local" - either via a proxy as you mention or with an actual local daemon. >> >> In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. >> >> [jb] Ok. If unsolicited caching for local nodes will work, that's probably all that is needed. Later, we can either 1) provide a cache in the library; 2) create an authenticated mechanism for storing things at the local daemon as Ilya has mentioned; or 3) use a fast local repo. >> >> --- >> Alex >> >> On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: >> >>> Hi Beichuan, >>> >>> Thanks for the further explanation. >>> >>> We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) >>> >>> thanks, >>> Jeff >>> >>> >>> From: "bzhang at cs.arizona.edu" >>> Date: Tue, 18 Mar 2014 22:38:31 -0700 >>> To: Junxiao Shi >>> Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , >>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>> >>> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. >>> >>> What Junxiao said is probably what the first release of NFD will have. >>> >>> Beichuan >>> >>> On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: >>> >>>> Hi Peter >>>> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >>>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >>>> You should insert Data into a repository instead. 
>>>> Yours, Junxiao >>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Ndn-app mailing list > Ndn-app at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app From gpau at cs.ucla.edu Sat Mar 22 18:25:53 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Sat, 22 Mar 2014 18:25:53 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: Message-ID: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> Hi All, I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. thanks Giovanni. On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: > > Hi, > > I agree with Alex - I am not suggesting either infinite lifetimes or hard state. > > What I am suggesting is: > > 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. > > 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. > > I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. > > Thanks, > Jeff > > > From: Alex Afanasyev > Date: Fri, 21 Mar 2014 09:57:56 -0700 > To: Jeff Burke > Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Interest lifetime limit > > Hi all, > > In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? 
And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. > > Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. > > As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). > > In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). > > --- > Alex > > > On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: > >> >> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >> >> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >> >> Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >> >> Jeff >> >> From: Junxiao Shi >> Date: Thu, 20 Mar 2014 22:17:37 -0700 >> To: Giovanni Pau >> Cc: Jeff Burke , , Lixia Zhang >> Subject: Re: [Nfd-dev] Interest lifetime limit >> >> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >> * Protocol should not set an upper bound on InterestLifetime. >> * Setting a *practical* upper bound is a policy issue, configurable by operator. >> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >> * Most applications don't need any special lifetime. >> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >> * Forwarder cannot afford long lifetime because PIT entries consume memory. >> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. 
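(As a reading aid for the proposal quoted above: the cap is a forwarder-side policy applied to whatever lifetime the Interest carries, not a change to the wire format. A minimal sketch, with assumed names such as effectiveLifetime and g_maxInterestLifetime and the proposed 32768 ms default; this is not the actual NFD code.)

    #include <algorithm>
    #include <chrono>

    // Stand-in Interest; the real class carries the full wire encoding.
    struct Interest {
      std::chrono::milliseconds lifetime{4000};  // usual default when the field is absent
    };

    // Operator-configurable policy knob; 32768 ms is the proposed default cap.
    static std::chrono::milliseconds g_maxInterestLifetime{32768};

    // Lifetime actually used for the PIT entry: the Interest is not rejected,
    // its pending time is simply capped at the configured maximum.
    std::chrono::milliseconds effectiveLifetime(const Interest& interest)
    {
      return std::min(interest.lifetime, g_maxInterestLifetime);
    }

An operator could then raise or lower such a knob in the forwarder configuration without applications having to change how they set InterestLifetime.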
>> Yours, Junxiao >> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >> > >> > Junxiao, >> > >> > sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >> > >> > Thanks >> > g. >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > From alexander.afanasyev at ucla.edu Sat Mar 22 18:28:11 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sat, 22 Mar 2014 18:28:11 -0700 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> References: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> Message-ID: Hi Giovanni, Sure, this is definitely a good idea. My hope is that we will have a number of alternative implementations of ContentStore (and may be other structures) in the same code, while either at compilation (I prefer this) or run time (would be slower... but easier to manage) "user" will be able to select appropriate version (like it is in ndnSIM). --- Alex On Mar 22, 2014, at 6:24 PM, Giovanni Pau wrote: > Hi, > > just to reinforce, this will be needed at least as configuration parameter in future releases too as for as is key to be able to exploit unsolicited caching of content. As per our studies we relay on (unsolicited) cashed content between 25 and 50% of the times due to the dynamic partitioning nature of the vehicular network. If in the future this option will be no longer supported we may need to branch out the vehicular version; i find branching out a way much less elegant option compared to a simple configuration parameter. > > Best > g. > > > > ========================== > It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. > > - Leonardo da Vinci > ========================== > > > > > On Mar 22, 2014, at 12:11 PM, Alex Afanasyev wrote: > >> As Junxiao mentioned several time, first release WILL have unsolicited caching for LOCAL apps. >> >> --- >> Alex >> >> On Mar 22, 2014, at 12:08 PM, Burke, Jeff wrote: >> >>> >>> Just to close this thread: In the first release, NFD will not have unsolicited caching for local apps? >>> >>> So we need to transition the ndnrtc application and any others to have application-side buffers or use a repo? >>> >>> jeff >>> >>> From: Jeff Burke >>> Date: Wed, 19 Mar 2014 14:33:20 +0000 >>> To: Alex Afanasyev >>> Cc: "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "nfd-dev at lists.cs.ucla.edu" >>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>> >>> Hi Alex, >>> Thanks for the reply. Comments below. >>> Jeff >>> >>> From: Alex Afanasyev >>> Date: Tue, 18 Mar 2014 23:39:14 -0700 >>> To: Jeff Burke >>> Cc: "bzhang at cs.arizona.edu" , Junxiao Shi , "ndn-app at lists.cs.ucla.edu" , "Gusev, Peter" , "nfd-dev at lists.cs.ucla.edu" >>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>> >>> Hi Jeff, >>> >>> Even in the first release, there is no problem with unsolicited data caching for **local** faces (unix socket and tcp connection to localhost address), which should be sufficient for any stand-alone application, including ndnrtc. 
>>> >>> [jb] Yes, this is all that is needed. >>> >>> I'm kind of blanking right now on how ndnrtc relates to browser (is it inside browser and can do local connection or javascript will to websockets for that). If it is websocket, then the websocket "proxy" (and/or special face inside NFD---just in case, this will not be in the first release) can be made "local", so unsolicited caching can be enabled. >>> >>> [jb] Our plan is that browser integration will have two components: 1) native code that provides rtc core functions, hopefully in an add-on/extension; 2) javascript that handles as much as possible, including conference discovery, etc. Neither would be required to use a websockets proxy as they will have access to socket functions via the add-on/extension, though they might to make the js more reusable in other circumstances. We set browser integration aside to get the media handling working so don't know exactly how it's going to go yet. But all unsolicited caching would be "local" - either via a proxy as you mention or with an actual local daemon. >>> >>> In any case, as Beichuan pointed out, Junxiao described the behavior that will be in the first release, which will have exactly one hard-coded caching policy for the content stored. Next releases would have policies that can be adjusted per node. >>> >>> [jb] Ok. If unsolicited caching for local nodes will work, that's probably all that is needed. Later, we can either 1) provide a cache in the library; 2) create an authenticated mechanism for storing things at the local daemon as Ilya has mentioned; or 3) use a fast local repo. >>> >>> --- >>> Alex >>> >>> On Mar 18, 2014, at 10:44 PM, Burke, Jeff wrote: >>> >>>> Hi Beichuan, >>>> >>>> Thanks for the further explanation. >>>> >>>> We would like to run the ndnrtc on NFD as an initial test ? should we look for this functionality in the repo or try to provide it in the library? (or both?) >>>> >>>> thanks, >>>> Jeff >>>> >>>> >>>> From: "bzhang at cs.arizona.edu" >>>> Date: Tue, 18 Mar 2014 22:38:31 -0700 >>>> To: Junxiao Shi >>>> Cc: "ndn-app at lists.cs.ucla.edu" , Peter Gusev , >>>> Subject: Re: [Nfd-dev] NDN-RTC poke Data to CS >>>> >>>> In my opinion, caching unsolicited data or not should be the choice of each individual node; nothing in the architecture or protocol prevents that. >>>> >>>> What Junxiao said is probably what the first release of NFD will have. >>>> >>>> Beichuan >>>> >>>> On Mar 18, 2014, at 7:57 PM, Junxiao Shi wrote: >>>> >>>>> Hi Peter >>>>> In seminar slides you mention that the RTC application in browser may poke Data to a remote forwarder. >>>>> I want to inform you that NFD will not admit any unsolicited Data from non-local face. NFD will admit unsolicited Data from local face, but they will be the first to get evicted when CS is full. >>>>> You should insert Data into a repository instead. 
>>>>> Yours, Junxiao >>>>> _______________________________________________ >>>>> Nfd-dev mailing list >>>>> Nfd-dev at lists.cs.ucla.edu >>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>> >>>> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ >> Ndn-app mailing list >> Ndn-app at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app > From alexander.afanasyev at ucla.edu Sat Mar 22 18:33:28 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sat, 22 Mar 2014 18:33:28 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> References: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> Message-ID: <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> I agree, but want to bring out the problem of soft state again. Why would routers want to keep PIT state for a prolonged period of time if it has no idea if the downstream still has interest in the data. As for (1). Data type used for lifetime allows expressing time durations from 1 milliseconds to billion years and more (it is 2^64 milliseconds), so it is really not a "practical" bound. --- Alex On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: > Hi All, > > I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. > > thanks > Giovanni. > > > > On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: > >> >> Hi, >> >> I agree with Alex - I am not suggesting either infinite lifetimes or hard state. >> >> What I am suggesting is: >> >> 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. >> >> 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. >> >> I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. 
Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. >> >> Thanks, >> Jeff >> >> >> From: Alex Afanasyev >> Date: Fri, 21 Mar 2014 09:57:56 -0700 >> To: Jeff Burke >> Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" >> Subject: Re: [Nfd-dev] Interest lifetime limit >> >> Hi all, >> >> In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. >> >> Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. >> >> As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). >> >> In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). >> >> --- >> Alex >> >> >> On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: >> >>> >>> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >>> >>> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >>> >>> Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >>> >>> Jeff >>> >>> From: Junxiao Shi >>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>> To: Giovanni Pau >>> Cc: Jeff Burke , , Lixia Zhang >>> Subject: Re: [Nfd-dev] Interest lifetime limit >>> >>> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >>> * Protocol should not set an upper bound on InterestLifetime. 
>>> * Setting a *practical* upper bound is a policy issue, configurable by operator. >>> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >>> * Most applications don't need any special lifetime. >>> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >>> * Forwarder cannot afford long lifetime because PIT entries consume memory. >>> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. >>> Yours, Junxiao >>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>> >>>> Junxiao, >>>> >>>> sorry i can't get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >>>> >>>> Thanks >>>> g. >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> > From shijunxiao at email.arizona.edu Sat Mar 22 18:40:58 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Sat, 22 Mar 2014 18:40:58 -0700 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> References: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> Message-ID: Dear folks The design is: 1. Incoming Data pipeline decides whether a Data is unsolicited or not. 2. A function decides whether an unsolicited Data CAN be admitted. - Currently, this function returns true if incoming face is local. - Replacing this function allows for alternate behaviors. - This function SHOULD NOT be part of CS policy (which governs the replacement of cached Data). It is a policy in forwarding, not in CS. 3. If allowed by the function in step 2, the Data is given to CS. CS policy decides whether to admit the Data based on available space, and also decides which Data to evict in case CS is full. Yours, Junxiao -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpau at cs.ucla.edu Sat Mar 22 18:48:43 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Sat, 22 Mar 2014 18:48:43 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> References: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> Message-ID: In principle I agree on having a default value, whatever it is; 32 sec is a bit short, but if there is consensus it is fine. In the vehicular case I would like to be able to issue an interest for something like /hazards/my-location/ahead of me/ and keep it alive for at least several minutes; radio resources are scarce, and renewing this every 30 sec appears to me a bit too much, as you always want to know if there are hazards or accidents. That said, I agree on a reasonable default, /g. ========================== It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. - Leonardo da Vinci ========================== On Mar 22, 2014, at 6:33 PM, Alex Afanasyev wrote: > I agree, but want to bring out the problem of soft state again.
Why would routers want to keep PIT state for a prolonged period of time if it has no idea if the downstream still has interest in the data. > > As for (1). Data type used for lifetime allows expressing time durations from 1 milliseconds to billion years and more (it is 2^64 milliseconds), so it is really not a "practical" bound. > > --- > Alex > > On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: > >> Hi All, >> >> I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. >> >> thanks >> Giovanni. >> >> >> >> On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: >> >>> >>> Hi, >>> >>> I agree with Alex - I am not suggesting either infinite lifetimes or hard state. >>> >>> What I am suggesting is: >>> >>> 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. >>> >>> 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. >>> >>> I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. >>> >>> Thanks, >>> Jeff >>> >>> >>> From: Alex Afanasyev >>> Date: Fri, 21 Mar 2014 09:57:56 -0700 >>> To: Jeff Burke >>> Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" >>> Subject: Re: [Nfd-dev] Interest lifetime limit >>> >>> Hi all, >>> >>> In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. >>> >>> Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. >>> >>> As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. 
If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). >>> >>> In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). >>> >>> --- >>> Alex >>> >>> >>> On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: >>> >>>> >>>> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >>>> >>>> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >>>> >>>> Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >>>> >>>> Jeff >>>> >>>> From: Junxiao Shi >>>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>>> To: Giovanni Pau >>>> Cc: Jeff Burke , , Lixia Zhang >>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>> >>>> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >>>> * Protocol should not set an upper bound on InterestLifetime. >>>> * Setting a *practical* upper bound is a policy issue, configurable by operator. >>>> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >>>> * Most applications don't need any special lifetime. >>>> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >>>> * Forwarder cannot afford long lifetime because PIT entries consume memory. >>>> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. >>>> Yours, Junxiao >>>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>>> >>>>> Junxiao, >>>>> >>>>> sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >>>>> >>>>> Thanks >>>>> g. 
>>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >> > From alexander.afanasyev at ucla.edu Sat Mar 22 18:52:56 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sat, 22 Mar 2014 18:52:56 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> Message-ID: This case seems to be slightly different from the general case with Interests. I suspect the intention is not to send out Interests, but just to keep PIT state locally alive, as long as the application is alive. (Am I missing something here, and this state goes beyond just the local app-local daemon?) For this case, I suspect, we can do something special. -- Alex On Mar 22, 2014, at 6:48 PM, Giovanni Pau wrote: > > In principle i agree for a default value of any time 32 sec are a bit short but if there is consensus is fine. In the vehicular case i would like to be able to issue an interest for something like /hazards/my-location/ahead of me/ and let it alive for at least several minutes, radio resources are scarce and renewing this every 30 sec appears to me a bit too much as you always want to know if there are hazards, or accidents.. > > That said, i agree for a reasonable default, > > > /g. > ========================== > It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. > > - Leonardo da Vinci > ========================== > > > > > On Mar 22, 2014, at 6:33 PM, Alex Afanasyev wrote: > >> I agree, but want to bring out the problem of soft state again. Why would routers want to keep PIT state for a prolonged period of time if it has no idea if the downstream still has interest in the data. >> >> As for (1). Data type used for lifetime allows expressing time durations from 1 milliseconds to billion years and more (it is 2^64 milliseconds), so it is really not a "practical" bound. >> >> --- >> Alex >> >> On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: >> >>> Hi All, >>> >>> I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. >>> >>> thanks >>> Giovanni. >>> >>> >>> >>> On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: >>> >>>> >>>> Hi, >>>> >>>> I agree with Alex - I am not suggesting either infinite lifetimes or hard state. >>>> >>>> What I am suggesting is: >>>> >>>> 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. >>>> >>>> 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies.
>>>> >>>> I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. >>>> >>>> Thanks, >>>> Jeff >>>> >>>> >>>> From: Alex Afanasyev >>>> Date: Fri, 21 Mar 2014 09:57:56 -0700 >>>> To: Jeff Burke >>>> Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" >>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>> >>>> Hi all, >>>> >>>> In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. >>>> >>>> Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. >>>> >>>> As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). >>>> >>>> In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). >>>> >>>> --- >>>> Alex >>>> >>>> >>>> On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: >>>> >>>>> >>>>> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >>>>> >>>>> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >>>>> >>>>> Further, are we sure that long-lived interests might not be common in some applications? 
For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >>>>> >>>>> Jeff >>>>> >>>>> From: Junxiao Shi >>>>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>>>> To: Giovanni Pau >>>>> Cc: Jeff Burke , , Lixia Zhang >>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>> >>>>> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >>>>> * Protocol should not set an upper bound on InterestLifetime. >>>>> * Setting a *practical* upper bound is a policy issue, configurable by operator. >>>>> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >>>>> * Most applications don't need any special lifetime. >>>>> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >>>>> * Forwarder cannot afford long lifetime because PIT entries consume memory. >>>>> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. >>>>> Yours, Junxiao >>>>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>>>> >>>>>> Junxiao, >>>>>> >>>>>> sorry i can't get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >>>>>> >>>>>> Thanks >>>>>> g. >>>>> _______________________________________________ >>>>> Nfd-dev mailing list >>>>> Nfd-dev at lists.cs.ucla.edu >>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>> >>> >> > From gpau at cs.ucla.edu Sat Mar 22 18:56:21 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Sat, 22 Mar 2014 18:56:21 -0700 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: References: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> Message-ID: <04DDB348-1A5F-4A52-9B61-4EA2C9274DF3@cs.ucla.edu> Hi All, Junxiao, thanks for the clarification; in that case we will just add our code at step 2. I'm just concerned about the maintenance of the code. I would avoid, pretty much at all costs, having different separate code trees; this would make maintenance a hell. That is why I'm much more in favor of adding some complexity in the code to allow flexibility; this goes for caching unsolicited data as well as lifetime, etc. In other words, we may have the function there that, depending on a number of parameters (i.e. whether the Face is V2V/V2I), decides if unsolicited data is to be stored in the CS or not. I agree with keeping the design separate from the CS; this will allow a lot of experimenting with different CS strategies, etc. Thanks g. On Mar 22, 2014, at 6:40 PM, Junxiao Shi wrote: > Dear folks > > The design is: > - Incoming Data pipeline decides whether a Data is unsolicited or not. > - A function decides whether an unsolicited Data CAN be admitted. > - Currently, this function returns true if incoming face is local. > - Replacing this function allows for alternate behaviors. > - This function SHOULD NOT be part of CS policy (which governs the replacement of cached Data). It is a policy in forwarding, not in CS. > - If allowed by the function in step 2, the Data is given to CS.
> CS policy decides whether to admit the Data based on available space, and also decides which Data to evict in case CS is full. > > Yours, Junxiao > _______________________________________________ > Ndn-app mailing list > Ndn-app at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app From christos at cs.colostate.edu Sun Mar 23 06:16:43 2014 From: christos at cs.colostate.edu (Christos Papadopoulos) Date: Sun, 23 Mar 2014 07:16:43 -0600 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> Message-ID: <532EDEBB.5010509@cs.colostate.edu> Let me see if I can summarize (for my sake) and then ask a question: - Alex wants a defense mechanism in the PIT to guard against resource exhaustion from too many pending Interests (great to have such a mechanism, BTW). - Alex suggests a reasonably short Interest lifetime in the PIT. - Note, however, that this does not guard against deliberate attacks (or even nasty bugs such a congestion control scheme gone awry) - one can always send a flood of interests to overflow a PIT with any reasonable Interest lifetime. - As I see it, options are (a) set a short default as a first line defense and leave it at that for now, or (b) set a longer default but have a more sophisticated mechanism to purge PIT state when we hit the limit. - Along with Jeff and Giovanni, I do like option (b) with an operator configurable default. I also like Jeff's suggestion to pick a point value to mean "as long as you can". Local policies always prevail, of course, and these interests might be the first to be purged in case of overload. - I would like the default be the common case, which I think means the shorter value. - So my question to Alex is, do you think it is feasible to have some resource exhaustion defense mechanism in this version of NFD beyond a short Interest lifetime, and being mindful of the implementation effort at this stage, what would you suggest? Christos. On 03/22/2014 07:52 PM, Alex Afanasyev wrote: > This case seem to be slightly different that a general case with Interests. I suspect, the intention is not send out Interests, but just keep PIT state locally alive, as long as the application alive. (Am I missing something here and this state is beyond just local app-local daemon?) For this case, I suspect, we can do something special. > > -- > Alex > > On Mar 22, 2014, at 6:48 PM, Giovanni Pau wrote: > >> >> In principle i agree for a default value of any time 32 sec are a bit short but if there is consensus is fine. In the vehicular case i would like to be able to issue an interest for something like /hazards/my-location/ahead of me/ and let it alive for at least several minutes, radio resources are scarce and renewing this every 30 sec appears to me a bit too much as you always want to know if there are hazards, or accidents.. >> >> That said, i agree for a reasonable default, >> >> >> /g. >> ========================== >> It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. >> >> - Leonardo da Vinci >> ========================== >> >> >> >> >> On Mar 22, 2014, at 6:33 PM, Alex Afanasyev wrote: >> >>> I agree, but want to bring out the problem of soft state again. Why would routers want to keep PIT state for a prolonged period of time if it has no idea if the downstream still has interest in the data. >>> >>> As for (1). 
Data type used for lifetime allows expressing time durations from 1 milliseconds to billion years and more (it is 2^64 milliseconds), so it is really not a "practical" bound. >>> >>> --- >>> Alex >>> >>> On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: >>> >>>> Hi All, >>>> >>>> I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. >>>> >>>> thanks >>>> Giovanni. >>>> >>>> >>>> >>>> On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: >>>> >>>>> >>>>> Hi, >>>>> >>>>> I agree with Alex - I am not suggesting either infinite lifetimes or hard state. >>>>> >>>>> What I am suggesting is: >>>>> >>>>> 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. >>>>> >>>>> 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. >>>>> >>>>> I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. >>>>> >>>>> Thanks, >>>>> Jeff >>>>> >>>>> >>>>> From: Alex Afanasyev >>>>> Date: Fri, 21 Mar 2014 09:57:56 -0700 >>>>> To: Jeff Burke >>>>> Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" >>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>> >>>>> Hi all, >>>>> >>>>> In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. >>>>> >>>>> Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. >>>>> >>>>> As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. 
This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). >>>>> >>>>> In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). >>>>> >>>>> --- >>>>> Alex >>>>> >>>>> >>>>> On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: >>>>> >>>>>> >>>>>> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >>>>>> >>>>>> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >>>>>> >>>>>> Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >>>>>> >>>>>> Jeff >>>>>> >>>>>> From: Junxiao Shi >>>>>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>>>>> To: Giovanni Pau >>>>>> Cc: Jeff Burke , , Lixia Zhang >>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>> >>>>>> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >>>>>> * Protocol should not set an upper bound on InterestLifetime. >>>>>> * Setting a *practical* upper bound is a policy issue, configurable by operator. >>>>>> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >>>>>> * Most applications don't need any special lifetime. >>>>>> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >>>>>> * Forwarder cannot afford long lifetime because PIT entries consume memory. >>>>>> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. >>>>>> Yours, Junxiao >>>>>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>>>>> >>>>>>> Junxiao, >>>>>>> >>>>>>> sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >>>>>>> >>>>>>> Thanks >>>>>>> g. 
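(Sketch, prompted by Christos' question above about a defense beyond a short default lifetime: one option-(b)-style mechanism is to bound the PIT by size and evict the entries closest to expiry when the bound is hit. The types and the eviction choice below are assumptions for illustration, not a mechanism NFD has committed to; note that evicting an entry also erases the breadcrumb needed to bring the matching Data back downstream.)

    #include <chrono>
    #include <cstddef>
    #include <map>
    #include <utility>

    using Clock = std::chrono::steady_clock;

    struct PitEntry { /* name, in-records, out-records, ... */ };

    // A PIT index ordered by expiry time with a hard size limit: when the
    // limit is reached, the entry that would expire soonest is dropped first.
    class BoundedPit {
    public:
      explicit BoundedPit(std::size_t maxEntries) : m_maxEntries(maxEntries) {}

      void insert(Clock::time_point expiry, PitEntry entry)
      {
        if (m_byExpiry.size() >= m_maxEntries) {
          // evicting an entry breaks the breadcrumb trail for that Interest,
          // so the corresponding Data (if any) cannot be delivered downstream
          m_byExpiry.erase(m_byExpiry.begin());
        }
        m_byExpiry.emplace(expiry, std::move(entry));
      }

    private:
      std::size_t m_maxEntries;
      std::multimap<Clock::time_point, PitEntry> m_byExpiry;
    };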
>>>>>> _______________________________________________ >>>>>> Nfd-dev mailing list >>>>>> Nfd-dev at lists.cs.ucla.edu >>>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>> >>>> >>> >> > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > From alexander.afanasyev at ucla.edu Sun Mar 23 10:19:23 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sun, 23 Mar 2014 10:19:23 -0700 Subject: [Nfd-dev] Config file for the library Message-ID: Hi guys, I just merged commit https://github.com/named-data/ndn-cpp-dev/commit/c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96 to the ndn-cpp-dev library, which changes how the library selects which protocol to use to register prefixes and where to look for the UNIX socket. NFD=1 and NRD=1 environment variables are no longer used. All configuration should be done using either ~/.ndn/client.conf, @SYSCONFDIR@/ndn/client.conf (e.g., /usr/local/etc/ndn/client.conf), or /etc/ndn/client.conf. The sample config file is in the root folder of the ndn-cpp-dev repo. Just in case, I'll post it here:

; "unix_socket" specifies the location of the NFD unix socket
unix_socket=/var/run/nfd.sock

; "protocol" determines the protocol for prefix registration
; it has a value of:
;   nfd-0.1
;   nrd-0.1
;   ndnd-tlv-0.7
;   ndnx-0.7
protocol=nrd-0.1

--- Alex From lixia at cs.ucla.edu Sun Mar 23 12:50:19 2014 From: lixia at cs.ucla.edu (Lixia Zhang) Date: Sun, 23 Mar 2014 12:50:19 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: <532EDEBB.5010509@cs.colostate.edu> References: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> <532EDEBB.5010509@cs.colostate.edu> Message-ID: Below is a quick summary of the NFD chat this morning (Beichuan, Junxiao, Steve, Alex, me); hopefully it addresses Christos' comments below. 1/ What's implemented in release-1 (the email exchanges showed some misunderstandings about this) - Each Interest carries a lifetime. - One can control the upper bound of this lifetime through configuration. - This magic number Junxiao threw out, 2^15 msec, is meant to be the default max Interest lifetime, if no lifetime bound is configured. In short, release-1 does allow you to set an arbitrarily long Interest lifetime. 2/ Interest lifetime and soft-state Generally speaking, the desire to use arbitrarily long Interest lifetimes does not seem to match the soft-state spirit of NDN. If some application really wants a "long-lived Interest", one can probably support that desire through some library function which periodically re-sends the long-lived Interest to refresh the network PIT state. 3/ We need to understand the consequence of purging Interests: PIT state (the breadcrumb trace along the path between the consumer and where the data is) must be consistent to get the Data back to the consumer. If any individual node purges its PIT entries, it destroys the breadcrumb trace and Data can't make it back (Alex told the story that some ndnSIM players had done this PIT entry purge game, and cried "how come I got so many Interest timeouts?" ;-) 4/ The idea mentioned in 2/ (letting a node refresh the Interests it wants to stay in the PIT) also matches the scheme that Van talked about a while back regarding PIT size control: an upstream router should set an upper bound on the Interest lifetime received from a downstream router, saying "I'll keep any Interest you send to me for no more than N seconds".
So it is up to the downstream router to retransmit those Interests that did not get satisfied in N seconds. (One might think "what's the gain for purposely making downstream send the same Interests multiple times" -- that's my question before. Then I realized that this "retransmitting" gives the downstream router an opportunity to re-prioritize the Interests that it really wants to get) this may be the direction to consider for PIT size management instead of purging PIT entries, for wired Internet. I will work with our vehicle group to figure out whats best for vehicle situation (there is no persistent trace/path/neighbor in that case). Lixia On Mar 23, 2014, at 6:16 AM, Christos Papadopoulos wrote: > Let me see if I can summarize (for my sake) and then ask a question: > > - Alex wants a defense mechanism in the PIT to guard against resource exhaustion from too many pending Interests (great to have such a mechanism, BTW). > > - Alex suggests a reasonably short Interest lifetime in the PIT. > > - Note, however, that this does not guard against deliberate attacks (or even nasty bugs such a congestion control scheme gone awry) - one can always send a flood of interests to overflow a PIT with any reasonable Interest lifetime. > > - As I see it, options are (a) set a short default as a first line defense and leave it at that for now, or (b) set a longer default but have a more sophisticated mechanism to purge PIT state when we hit the limit. > > - Along with Jeff and Giovanni, I do like option (b) with an operator configurable default. I also like Jeff's suggestion to pick a point value to mean "as long as you can". Local policies always prevail, of course, and these interests might be the first to be purged in case of overload. > > - I would like the default be the common case, which I think means the shorter value. > > - So my question to Alex is, do you think it is feasible to have some resource exhaustion defense mechanism in this version of NFD beyond a short Interest lifetime, and being mindful of the implementation effort at this stage, what would you suggest? > > Christos. > > > On 03/22/2014 07:52 PM, Alex Afanasyev wrote: >> This case seem to be slightly different that a general case with Interests. I suspect, the intention is not send out Interests, but just keep PIT state locally alive, as long as the application alive. (Am I missing something here and this state is beyond just local app-local daemon?) For this case, I suspect, we can do something special. >> >> -- >> Alex >> >> On Mar 22, 2014, at 6:48 PM, Giovanni Pau wrote: >> >>> >>> In principle i agree for a default value of any time 32 sec are a bit short but if there is consensus is fine. In the vehicular case i would like to be able to issue an interest for something like /hazards/my-location/ahead of me/ and let it alive for at least several minutes, radio resources are scarce and renewing this every 30 sec appears to me a bit too much as you always want to know if there are hazards, or accidents.. >>> >>> That said, i agree for a reasonable default, >>> >>> >>> /g. >>> ========================== >>> It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. >>> >>> - Leonardo da Vinci >>> ========================== >>> >>> >>> >>> >>> On Mar 22, 2014, at 6:33 PM, Alex Afanasyev wrote: >>> >>>> I agree, but want to bring out the problem of soft state again. 
Why would routers want to keep PIT state for a prolonged period of time if it has no idea if the downstream still has interest in the data. >>>> >>>> As for (1). Data type used for lifetime allows expressing time durations from 1 milliseconds to billion years and more (it is 2^64 milliseconds), so it is really not a "practical" bound. >>>> >>>> --- >>>> Alex >>>> >>>> On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: >>>> >>>>> Hi All, >>>>> >>>>> I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. >>>>> >>>>> thanks >>>>> Giovanni. >>>>> >>>>> >>>>> >>>>> On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: >>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> I agree with Alex - I am not suggesting either infinite lifetimes or hard state. >>>>>> >>>>>> What I am suggesting is: >>>>>> >>>>>> 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. >>>>>> >>>>>> 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. >>>>>> >>>>>> I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. >>>>>> >>>>>> Thanks, >>>>>> Jeff >>>>>> >>>>>> >>>>>> From: Alex Afanasyev >>>>>> Date: Fri, 21 Mar 2014 09:57:56 -0700 >>>>>> To: Jeff Burke >>>>>> Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" >>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>> >>>>>> Hi all, >>>>>> >>>>>> In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. >>>>>> >>>>>> Interests are generally soft state. The client issues Interest and expects data. Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. 
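For scale, the 2^64 ms range Alex mentions above works out to roughly

    2^64 ms ~= 1.8 * 10^19 ms ~= 1.8 * 10^16 s ~= 5.8 * 10^8 years

i.e. hundreds of millions of years, so the encoding itself imposes no practical limit and any real bound has to come from policy.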
>>>>>> >>>>>> As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). >>>>>> >>>>>> In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). >>>>>> >>>>>> --- >>>>>> Alex >>>>>> >>>>>> >>>>>> On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: >>>>>> >>>>>>> >>>>>>> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >>>>>>> >>>>>>> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >>>>>>> >>>>>>> Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >>>>>>> >>>>>>> Jeff >>>>>>> >>>>>>> From: Junxiao Shi >>>>>>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>>>>>> To: Giovanni Pau >>>>>>> Cc: Jeff Burke , , Lixia Zhang >>>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>>> >>>>>>> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >>>>>>> * Protocol should not set an upper bound on InterestLifetime. >>>>>>> * Setting a *practical* upper bound is a policy issue, configurable by operator. >>>>>>> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >>>>>>> * Most applications don't need any special lifetime. >>>>>>> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >>>>>>> * Forwarder cannot afford long lifetime because PIT entries consume memory. >>>>>>> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. >>>>>>> Yours, Junxiao >>>>>>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>>>>>> >>>>>>>> Junxiao, >>>>>>>> >>>>>>>> sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >>>>>>>> >>>>>>>> Thanks >>>>>>>> g. 
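To make the point in the exchange above concrete: the 32768 ms figure is an operator-configurable policy default applied by the forwarder, not a limit of the wire encoding. A minimal sketch of what such a clamp amounts to (the names and the default constant here are illustrative, not NFD's actual code or configuration mechanism):

    #include <algorithm>
    #include <cstdint>

    // Illustrative only: clamp the requested InterestLifetime to an
    // operator-configured maximum before creating the PIT entry.
    const uint64_t DEFAULT_MAX_LIFETIME_MS = 32768; // the 2^15 ms default discussed above

    inline uint64_t
    effectiveLifetimeMs(uint64_t requestedMs,
                        uint64_t configuredMaxMs = DEFAULT_MAX_LIFETIME_MS)
    {
      return std::min(requestedMs, configuredMaxMs);
    }

An operator who wants to experiment with longer-lived Interests simply raises the configured maximum; nothing in the packet format prevents it.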
>>>>>>> _______________________________________________ >>>>>>> Nfd-dev mailing list >>>>>>> Nfd-dev at lists.cs.ucla.edu >>>>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>>> >>>>> >>>> >>> >> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev From oran at cisco.com Sun Mar 23 13:05:04 2014 From: oran at cisco.com (Dave Oran (oran)) Date: Sun, 23 Mar 2014 20:05:04 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: References: <937675BB-8D58-4F83-9935-8BB45379DEBB@ucla.edu> <7D13FB91-82DF-4BE2-81D4-81CEF6F6E4AA@cs.ucla.edu> Message-ID: <60339CCA-9D28-463F-B2B6-3881C59182F4@cisco.com> While pushing data out into a cache isn?t necessarily dangerous if the cache is *truly* local, I am frankly quite nervous about this as a precedent. Why? Because ?local? is an incredibly slippery concept which ought to be completely specified before going down this path. At a minimum, I?d suggest that ?local? not mean ?the face is on the same box as the app?, but that either: a) the cache and the app are in exactly the same security container or b) the cache and app are mutually authenticated by means outside of NDN, and that the cache is protected by authorization machinery against pollution or poisoning by an application. I?ll also note that if the goal here is to protect against an app that produces data and then the box it?s running on crashes or gets partitioned form the network, the approach of local faces won?t do the trick. It may however permit data to survive an application crash or exit. Clearly the alternative of having a fast and robust repo is superior to opening up the can of worms above. As a general comment, I?m detecting a bit more expediency here than I?m comfortable with. However, if we really need to seed caches, let me suggest we step back and try to design an alternative approach. If the goal is simply to ensure data is available in advance of interests arriving to either reduce delay or provide robustness against app crash or box crash or network partition (I suspect the vehicular guys might be the ones interested in partition) there are alternatives that might work a whole lot better. One that occurs to me on just a few minutes thought is to issue an interest for the data at the same time you publish the data, and have machinery to route the interest explicitly to the cache you want to fill. I?ll point out that explicit interest routing might have other uses as well. Just thinking out loud here. On Mar 22, 2014, at 9:40 PM, Junxiao Shi wrote: > Dear folks > > The design is: > Incoming Data pipeline decides whether a Data is unsolicited or not. > A function decides whether an unsolicited Data CAN be admitted. > Currently, this function returns true if incoming face is local. > Replacing this function allows for alternate behaviors. > This function SHOULD NOT be part of CS policy (which governs the replacement of cached Data). It is a policy in forwarding, not in CS. > If allowed by the function in step 2, the Data is given to CS. > CS policy decides whether to admit the Data based on available space, and also decides which Data to evict in case CS is full. 
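As a rough illustration of the design just quoted: step 2 is a forwarding-level predicate ("may this unsolicited Data be admitted at all"), kept separate from the CS replacement policy ("which cached Data to evict"). A minimal sketch with stand-in types rather than the real NFD classes (Face, Data and the function names here are illustrative assumptions):

    #include <functional>

    // Stand-ins so the sketch is self-contained; not the actual NFD types.
    struct Face { bool isLocal; };
    struct Data { };

    // Step 2 of the design: a replaceable predicate deciding whether an
    // unsolicited Data CAN be admitted; a forwarding policy, not a CS policy.
    using UnsolicitedDataPolicy = std::function<bool(const Face&, const Data&)>;

    // The current behavior described above: admit only Data arriving on a local face.
    inline bool
    admitLocalOnly(const Face& inFace, const Data&)
    {
      return inFace.isLocal;
    }

    // Incoming Data pipeline (pseudostructure):
    //   if (isUnsolicited && policy(inFace, data)) cs.insert(data);
    // after which the CS's own policy still decides admission and eviction
    // based on available space.

Dave's concern about how slippery "local" is then becomes a question of which predicate gets plugged in here, e.g. one that also requires the mutual authentication or shared security container he describes.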
> > Yours, Junxiao > _______________________________________________ > Ndn-app mailing list > Ndn-app at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From gpau at cs.ucla.edu Sun Mar 23 17:54:32 2014 From: gpau at cs.ucla.edu (Giovanni Pau) Date: Sun, 23 Mar 2014 17:54:32 -0700 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: References: <47EE9522-4406-4045-9383-5FFF2552B1DF@cs.ucla.edu> <43D7EB17-8383-4D30-8363-BBFEC74C61A9@ucla.edu> <532EDEBB.5010509@cs.colostate.edu> Message-ID: <8B202BD5-A1C2-4BCC-8D17-1D5D8CA71251@cs.ucla.edu> On Mar 23, 2014, at 12:50 PM, Lixia Zhang wrote: > Below is a quick summary of the NFD chat this morning (Beichuan, Junxiao, Steve, Alex, me), hopefully it addresses Christos comments below. > > 1/ What's implemented in release-1 (the email exchanges showed some misunderstandings about this) > - Each Interest carries a lifetime > - One can control the upper bound of this lifetime > through configuration. > - This magic number Junxiao threw out, 2^15 msec, is meant to be > the default max Interest lifetime, if no lifetime bound is > configured. > > In short, release-1 does allow you to set any arbitrarily long Interest lifetime. >>> It sounds good this will allow to understand what can happen with very long interests > > 2/ Interest lifetime and soft-state > Generally speaking, the desire of using arbitrarily long Interest lifetime does not seem to match the soft-state spirit of NDN. > > If some application really wants "long-lived Interest", one can probably support that desire through some library function which can periodically re-send the long-lived Interest to refresh the network PIT state. My only concern (and ignorance as we did not check the issue thoroughly yet) is that given a MAX_LIfetime for the interest in our VNDN case we may result flooding the network, especially in dense cases such as Wilshire at 4.30 pm. > > 3/ We need to understand the consequence of purging Interests: PIT state (breadcrumb trace along the path between consumer and where the data is) must be consistent to get the Data back to the consumer. > If any individual node purges its PIT table entries, it destroys the breadcrumb trace and Data can't make back > (Alex told the story that some ndnSIM players had done this PIT entry purge game, and cried "how come I got so many Interests timeouts?" ;-) I fully agree with 3. we need to understand the consequences of this, very deeply, once ICN deadline is over i may try to evaluate the effect of 1min or 5 min interest (that is the order i?m thinking) on very dense situations. > > 4/ The idea mentioned in 2/ (letting node refresh the Interests it wants to stay in PIT) also matches the scheme that Van talked a while back regarding the PIT size control: an upstream router should set an upper bound on the Interest lifetime received from a downstream router, saying "I'll keep all Interest you send to me no more than N seconds". So it is up to the downstream router to retransmit those Interests that did not get satisfied in N seconds. > > (One might think "what's the gain for purposely making downstream send the same Interests multiple times" -- that's my question before. 
Then I realized that this "retransmitting" gives the downstream router an opportunity to re-prioritize the Interests that it really wants to get) Agree on 4, sounds reasonable though the optimal N may vary a lot. > > this may be the direction to consider for PIT size management instead of purging PIT entries, for wired Internet. I will work with our vehicle group to figure out whats best for vehicle situation (there is no persistent trace/path/neighbor in that case). Fully agree that the breadcumbs are not there as mobility change and in this case is hard to understand what will happen without due simulations. One consequence that comes to my mind is that for a while there may be node with the interest pending but no longer in the path towards the consumer. Waiting for you In Paris, will be a lot of Fun ;) !!! > > Lixia > > > On Mar 23, 2014, at 6:16 AM, Christos Papadopoulos wrote: > >> Let me see if I can summarize (for my sake) and then ask a question: >> >> - Alex wants a defense mechanism in the PIT to guard against resource exhaustion from too many pending Interests (great to have such a mechanism, BTW). >> >> - Alex suggests a reasonably short Interest lifetime in the PIT. >> >> - Note, however, that this does not guard against deliberate attacks (or even nasty bugs such a congestion control scheme gone awry) - one can always send a flood of interests to overflow a PIT with any reasonable Interest lifetime. >> >> - As I see it, options are (a) set a short default as a first line defense and leave it at that for now, or (b) set a longer default but have a more sophisticated mechanism to purge PIT state when we hit the limit. >> >> - Along with Jeff and Giovanni, I do like option (b) with an operator configurable default. I also like Jeff's suggestion to pick a point value to mean "as long as you can". Local policies always prevail, of course, and these interests might be the first to be purged in case of overload. >> >> - I would like the default be the common case, which I think means the shorter value. >> >> - So my question to Alex is, do you think it is feasible to have some resource exhaustion defense mechanism in this version of NFD beyond a short Interest lifetime, and being mindful of the implementation effort at this stage, what would you suggest? >> >> Christos. >> >> >> On 03/22/2014 07:52 PM, Alex Afanasyev wrote: >>> This case seem to be slightly different that a general case with Interests. I suspect, the intention is not send out Interests, but just keep PIT state locally alive, as long as the application alive. (Am I missing something here and this state is beyond just local app-local daemon?) For this case, I suspect, we can do something special. >>> >>> -- >>> Alex >>> >>> On Mar 22, 2014, at 6:48 PM, Giovanni Pau wrote: >>> >>>> >>>> In principle i agree for a default value of any time 32 sec are a bit short but if there is consensus is fine. In the vehicular case i would like to be able to issue an interest for something like /hazards/my-location/ahead of me/ and let it alive for at least several minutes, radio resources are scarce and renewing this every 30 sec appears to me a bit too much as you always want to know if there are hazards, or accidents.. >>>> >>>> That said, i agree for a reasonable default, >>>> >>>> >>>> /g. >>>> ========================== >>>> It had long since come to my attention that people of accomplishment rarely sat back and let things happen to them. They went out and happened to things. 
>>>> >>>> - Leonardo da Vinci >>>> ========================== >>>> >>>> >>>> >>>> >>>> On Mar 22, 2014, at 6:33 PM, Alex Afanasyev wrote: >>>> >>>>> I agree, but want to bring out the problem of soft state again. Why would routers want to keep PIT state for a prolonged period of time if it has no idea if the downstream still has interest in the data. >>>>> >>>>> As for (1). Data type used for lifetime allows expressing time durations from 1 milliseconds to billion years and more (it is 2^64 milliseconds), so it is really not a "practical" bound. >>>>> >>>>> --- >>>>> Alex >>>>> >>>>> On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: >>>>> >>>>>> Hi All, >>>>>> >>>>>> I agree in full with Jeff arguments below that actually summarize my points even better than what i did myself. At this moment we know to little and bounding ourselves is not a good idea. Also in some environment may be smart to have relatively long time interest and have a smart Pit cleanup strategy. >>>>>> >>>>>> thanks >>>>>> Giovanni. >>>>>> >>>>>> >>>>>> >>>>>> On Mar 22, 2014, at 10:47 AM, Burke, Jeff wrote: >>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I agree with Alex - I am not suggesting either infinite lifetimes or hard state. >>>>>>> >>>>>>> What I am suggesting is: >>>>>>> >>>>>>> 1) We don't know enough to set a practical bound on interest lifetime, so let's leave the bound set to limit of the data type used for storing it (or one less than that), until we have some operational experience with what a reasonable value would be. If it is to be operator-configurable anyway, this could be set in a configuration file that is easy to modify in future distributions or for particular installations. >>>>>>> >>>>>>> 2) Reserve the maximum int/long value to correspond to "as long as the forwarder is willing to hang on to the interest" - this is not infinite lifetime but does leave the possibility for long-lived interest support in certain deployments, or with deployment-specific PIT cleanup strategies. >>>>>>> >>>>>>> I realize there is a practical concern but this seems related to the current state of the implementation - If the worry with these is related to not yet having a PIT cleanup mechanism, I would suggest that this is a basic feature that should be incorporated into the initial NFD release. (Same with content store and FIB limits / cleanup.) Even at this stage, busted or intentionally aggressive apps should not be able to crash the forwarder by filling these tables - this could happen even with relatively low maximum lifetimes. Cleanup info also needs to be logged and someday available through the instrumentation mechanisms; this has already come up in Peter's debugging of ccnx?caused delays in ndnrtc packet flows. >>>>>>> >>>>>>> Thanks, >>>>>>> Jeff >>>>>>> >>>>>>> >>>>>>> From: Alex Afanasyev >>>>>>> Date: Fri, 21 Mar 2014 09:57:56 -0700 >>>>>>> To: Jeff Burke >>>>>>> Cc: Junxiao Shi , Giovanni Pau , "nfd-dev at lists.cs.ucla.edu" >>>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> In my opinion, not having the limit and considering that one can set infinite interest lifetime is not entirely correct. Isn't the whole point of the Interest/Data exchange is to provide flow balance? And when you separate Interest from Data by a significant portion of time, I don't see how it works out to the flow balance. >>>>>>> >>>>>>> Interests are generally soft state. The client issues Interest and expects data. 
Within reasonable time interval, routers expect that the client is still down there and the network topology didn't change, so the response would reach. When we are getting to "unlimited" lifetimes, we are going towards "hard" state on routers, without any guarantee that the client is still alive or that the network hasn't changed. >>>>>>> >>>>>>> As I remember, Van was always saying that Interest should be a bilateral agreement between two neighbors. If downstream is still interested, it should re-express its interests. This by definition assumes finite lifetimes (either global maximum or neighbors explicitly communicate their maximum). >>>>>>> >>>>>>> In any case. The main reason I asked this question is that I have a desire to provide a basic protection against abuse of the NDN routers, at least in the testbed environment. If we don't do it in a reasonable way, anybody can send a bunch of interests with huge lifetimes and then just go home, while network will suffer until it is rebooted (or we have a reasonable mechanism PIT cleanup implemented). >>>>>>> >>>>>>> --- >>>>>>> Alex >>>>>>> >>>>>>> >>>>>>> On Mar 21, 2014, at 7:29 AM, Burke, Jeff wrote: >>>>>>> >>>>>>>> >>>>>>>> If it is going to be operator configurable, perhaps we can leave the practical limit set to the theoretical limit for research versions of NFD? I don't think we have enough experience with the tradeoffs you describe to pick an upper bound at this time. >>>>>>>> >>>>>>>> In my understanding, there is no real cost to the forwarder for long-lifetime interests, because it will need a mechanism to drop old pending interests when the PIT is full anyway. Unless forwarder PIT behavior is controlled in some special way by specific operators on behalf of local applications (as it might be in Giovanni's vehicular apps), the burden is always on the application to refresh the interest with an update period that is no longer than the maximum tolerable delay for a response to the Interest, because there are no guarantees on what stays in the PIT. >>>>>>>> >>>>>>>> Further, are we sure that long-lived interests might not be common in some applications? For example, if an application checks for automatic 'software updates' by issuing an interest every hour, what does that application have to lose by setting the Interest lifetime to one hour, even if it is not guaranteed to persist? >>>>>>>> >>>>>>>> Jeff >>>>>>>> >>>>>>>> From: Junxiao Shi >>>>>>>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>>>>>>> To: Giovanni Pau >>>>>>>> Cc: Jeff Burke , , Lixia Zhang >>>>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>>>> >>>>>>>> 20140114 meeting discussed InterestLifetime upper bound. Van's idea is: >>>>>>>> * Protocol should not set an upper bound on InterestLifetime. >>>>>>>> * Setting a *practical* upper bound is a policy issue, configurable by operator. >>>>>>>> My proposal of using 32768ms as the default upper bound is completely unrelated to "2 octets". Its reason is: >>>>>>>> * Most applications don't need any special lifetime. >>>>>>>> * NFD Notification mechanism is more efficient if longer lifetime can be used. Push applications also desire a long lifetime. >>>>>>>> * Forwarder cannot afford long lifetime because PIT entries consume memory. >>>>>>>> * Trade-off between the need of push application and the forwarder state cost leads to 32768ms default upper bound. 
>>>>>>>> Yours, Junxiao >>>>>>>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>>>>>>> >>>>>>>>> Junxiao, >>>>>>>>> >>>>>>>>> sorry i can?t get it, if we have 64bits, then why we need to bound it to a 16bit value? I agree is better to measure in ms rather than sec, but yet i do not understand the need to bound. I agree with jeff on the long timed interests, in our case such as an interest for a road-hazard in the direction of traveling. >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> g. >>>>>>>> _______________________________________________ >>>>>>>> Nfd-dev mailing list >>>>>>>> Nfd-dev at lists.cs.ucla.edu >>>>>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>>>> >>>>>> >>>>> >>>> >>> >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev From lanwang at memphis.edu Mon Mar 24 06:25:46 2014 From: lanwang at memphis.edu (Lan Wang (lanwang)) Date: Mon, 24 Mar 2014 13:25:46 +0000 Subject: [Nfd-dev] Config file for the library Message-ID: <91wsttnbyltol8lk8laf9uft.1395667539318@email.android.com> What happens when the protocol is ndnd-tlv-0.7 or ndnx-0.7? Lan -------- Original message -------- From: Alex Afanasyev Date: 03/23/2014 12:19 PM (GMT-06:00) To: "" Subject: [Nfd-dev] Config file for the library Hi guys, I just merged https://github.com/named-data/ndn-cpp-dev/commit/c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96 commit to ndn-cpp-dev library, which changes how the library selects which protocol to use to register prefixes and where to look for UNIX socket. NFD=1 and NRD=1 environmental variables are no longer used. All configuration should be done either using ~/.ndn/client.conf, @SYSCONFDIR@/ndn/client.conf (e.g., /usr/local/etc/ndn/client.conf), or /etc/ndn/client.conf The sample config file is in root folder of ndn-cpp-dev repo. Just in case, I'll post it here: ; "unix_socket" specifies the location of the NFD unix socket unix_socket=/var/run/nfd.sock ; "protocol" deteremines the protocol for prefix registration ; it has a value of: ; nfd-0.1 ; nrd-0.1 ; ndnd-tlv-0.7 ; ndnx-0.7 protocol=nrd-0.1 --- Alex _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dibenede at cs.colostate.edu Mon Mar 24 06:40:06 2014 From: dibenede at cs.colostate.edu (Steve DiBenedetto) Date: Mon, 24 Mar 2014 07:40:06 -0600 Subject: [Nfd-dev] Config file for the library In-Reply-To: <91wsttnbyltol8lk8laf9uft.1395667539318@email.android.com> References: <91wsttnbyltol8lk8laf9uft.1395667539318@email.android.com> Message-ID: "protocol" determines the Controller that should be used by faces (see Face::construct()). ndnd-tlv will use a ndnd::Controller (instead of nrd:: or nfd::). ndnx-0.7 will throw a Face::Error exception because it is unsupported for ndn-cpp-dev. On Mon, Mar 24, 2014 at 7:25 AM, Lan Wang (lanwang) wrote: > What happens when the protocol is ndnd-tlv-0.7 > or ndnx-0.7? 
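Steve's description of Face::construct() suggests a dispatch on the "protocol" string roughly like the sketch below. The Controller types and the exception are placeholders (per Steve, the real code throws Face::Error for the unsupported ndnx-0.7 case), so treat this as an illustration of the shape of the logic, not ndn-cpp-dev's actual implementation:

    #include <memory>
    #include <stdexcept>
    #include <string>

    // Placeholder controller types; the real ndn-cpp-dev classes differ.
    struct Controller { virtual ~Controller() { } };
    struct NfdController : Controller { };
    struct NrdController : Controller { };
    struct NdndTlvController : Controller { };

    std::unique_ptr<Controller>
    makeController(const std::string& protocol)
    {
      if (protocol == "nfd-0.1")
        return std::unique_ptr<Controller>(new NfdController);
      else if (protocol == "nrd-0.1")
        return std::unique_ptr<Controller>(new NrdController);
      else if (protocol == "ndnd-tlv-0.7")
        return std::unique_ptr<Controller>(new NdndTlvController);
      else // anything else, including ndnx-0.7, is unsupported
        throw std::runtime_error("Cannot create controller for unsupported protocol \"" + protocol + "\"");
    }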
> > Lan > > > > -------- Original message -------- > From: Alex Afanasyev > Date: 03/23/2014 12:19 PM (GMT-06:00) > To: "" > Subject: [Nfd-dev] Config file for the library > > > Hi guys, > > I just merged > https://github.com/named-data/ndn-cpp-dev/commit/c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96commit to ndn-cpp-dev library, which changes how the library selects which > protocol to use to register prefixes and where to look for UNIX socket. > > NFD=1 and NRD=1 environmental variables are no longer used. All > configuration should be done either using ~/.ndn/client.conf, @SYSCONFDIR@/ndn/client.conf > (e.g., /usr/local/etc/ndn/client.conf), or /etc/ndn/client.conf > > The sample config file is in root folder of ndn-cpp-dev repo. Just in > case, I'll post it here: > > ; "unix_socket" specifies the location of the NFD unix socket > unix_socket=/var/run/nfd.sock > > ; "protocol" deteremines the protocol for prefix registration > ; it has a value of: > ; nfd-0.1 > ; nrd-0.1 > ; ndnd-tlv-0.7 > ; ndnx-0.7 > protocol=nrd-0.1 > > --- > Alex > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From obaidasyed at gmail.com Mon Mar 24 10:27:14 2014 From: obaidasyed at gmail.com (Syed Obaid Amin) Date: Mon, 24 Mar 2014 12:27:14 -0500 Subject: [Nfd-dev] Config file for the library In-Reply-To: References: <91wsttnbyltol8lk8laf9uft.1395667539318@email.android.com> Message-ID: Hi Alex, I am not able to start nrd and getting this error: ERROR: Cannot create controller for unsupported protocol "nrd-0.1" This is what I am doing: On Terminal 1: ~$ sudo NFD_LOG=all nfd DEBUG: [NameTree] lookup / DEBUG: [NameTree] insert / DEBUG: [NameTree] Name / hash value = 2654435816 location = 488 On Terminal 2: ~$ cat ~/.ndn/client.conf unix_socket=/var/run/nfd.sock protocol=nrd-0.1 ~$ nrd $ nrd ERROR: Cannot create controller for unsupported protocol "nrd-0.1" Commit details NFD: ba7490517d1f4e9b699d7398788db03e1ffaeacc ndn-cpp-dev: c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96 NRD: ea56c614fe3065d3eee933c49a916ce48feae399 Any idea, what's going wrong here. Regards, Obaid On Mon, Mar 24, 2014 at 8:40 AM, Steve DiBenedetto < dibenede at cs.colostate.edu> wrote: > "protocol" determines the Controller that should be used by faces (see > Face::construct()). ndnd-tlv will use a ndnd::Controller (instead of nrd:: > or nfd::). ndnx-0.7 will throw a Face::Error exception because it is > unsupported for ndn-cpp-dev. > > > On Mon, Mar 24, 2014 at 7:25 AM, Lan Wang (lanwang) wrote: > >> What happens when the protocol is ndnd-tlv-0.7 >> or ndnx-0.7? >> >> Lan >> >> >> >> -------- Original message -------- >> From: Alex Afanasyev >> Date: 03/23/2014 12:19 PM (GMT-06:00) >> To: "" >> Subject: [Nfd-dev] Config file for the library >> >> >> Hi guys, >> >> I just merged >> https://github.com/named-data/ndn-cpp-dev/commit/c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96commit to ndn-cpp-dev library, which changes how the library selects which >> protocol to use to register prefixes and where to look for UNIX socket. >> >> NFD=1 and NRD=1 environmental variables are no longer used. 
All >> configuration should be done either using ~/.ndn/client.conf, @SYSCONFDIR@/ndn/client.conf >> (e.g., /usr/local/etc/ndn/client.conf), or /etc/ndn/client.conf >> >> The sample config file is in root folder of ndn-cpp-dev repo. Just in >> case, I'll post it here: >> >> ; "unix_socket" specifies the location of the NFD unix socket >> unix_socket=/var/run/nfd.sock >> >> ; "protocol" deteremines the protocol for prefix registration >> ; it has a value of: >> ; nfd-0.1 >> ; nrd-0.1 >> ; ndnd-tlv-0.7 >> ; ndnx-0.7 >> protocol=nrd-0.1 >> >> --- >> Alex >> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dibenede at cs.colostate.edu Mon Mar 24 10:31:51 2014 From: dibenede at cs.colostate.edu (Steve DiBenedetto) Date: Mon, 24 Mar 2014 11:31:51 -0600 Subject: [Nfd-dev] Config file for the library In-Reply-To: References: <91wsttnbyltol8lk8laf9uft.1395667539318@email.android.com> Message-ID: There's a missing "else" in the code and the fix is in code review: http://gerrit.named-data.net/#/c/559/ . Sorry about that. On Mon, Mar 24, 2014 at 11:27 AM, Syed Obaid Amin wrote: > Hi Alex, > > I am not able to start nrd and getting this error: > ERROR: Cannot create controller for unsupported protocol "nrd-0.1" > > This is what I am doing: > On Terminal 1: > ~$ sudo NFD_LOG=all nfd > DEBUG: [NameTree] lookup / > DEBUG: [NameTree] insert / > DEBUG: [NameTree] Name / hash value = 2654435816 location = 488 > > > > On Terminal 2: > ~$ cat ~/.ndn/client.conf > unix_socket=/var/run/nfd.sock > protocol=nrd-0.1 > > ~$ nrd > $ nrd > ERROR: Cannot create controller for unsupported protocol "nrd-0.1" > > > Commit details > NFD: ba7490517d1f4e9b699d7398788db03e1ffaeacc > ndn-cpp-dev: c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96 > NRD: ea56c614fe3065d3eee933c49a916ce48feae399 > > Any idea, what's going wrong here. > > Regards, > Obaid > > > On Mon, Mar 24, 2014 at 8:40 AM, Steve DiBenedetto < > dibenede at cs.colostate.edu> wrote: > >> "protocol" determines the Controller that should be used by faces (see >> Face::construct()). ndnd-tlv will use a ndnd::Controller (instead of nrd:: >> or nfd::). ndnx-0.7 will throw a Face::Error exception because it is >> unsupported for ndn-cpp-dev. >> >> >> On Mon, Mar 24, 2014 at 7:25 AM, Lan Wang (lanwang) wrote: >> >>> What happens when the protocol is ndnd-tlv-0.7 >>> or ndnx-0.7? >>> >>> Lan >>> >>> >>> >>> -------- Original message -------- >>> From: Alex Afanasyev >>> Date: 03/23/2014 12:19 PM (GMT-06:00) >>> To: "" >>> Subject: [Nfd-dev] Config file for the library >>> >>> >>> Hi guys, >>> >>> I just merged >>> https://github.com/named-data/ndn-cpp-dev/commit/c07b3a2fabc25bc6ad9d3bf9ffc9df7bf994dd96commit to ndn-cpp-dev library, which changes how the library selects which >>> protocol to use to register prefixes and where to look for UNIX socket. >>> >>> NFD=1 and NRD=1 environmental variables are no longer used. 
All >>> configuration should be done either using ~/.ndn/client.conf, @SYSCONFDIR@/ndn/client.conf >>> (e.g., /usr/local/etc/ndn/client.conf), or /etc/ndn/client.conf >>> >>> The sample config file is in root folder of ndn-cpp-dev repo. Just in >>> case, I'll post it here: >>> >>> ; "unix_socket" specifies the location of the NFD unix socket >>> unix_socket=/var/run/nfd.sock >>> >>> ; "protocol" deteremines the protocol for prefix registration >>> ; it has a value of: >>> ; nfd-0.1 >>> ; nrd-0.1 >>> ; ndnd-tlv-0.7 >>> ; ndnx-0.7 >>> protocol=nrd-0.1 >>> >>> --- >>> Alex >>> >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> >> > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Tue Mar 25 14:52:46 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Tue, 25 Mar 2014 21:52:46 +0000 Subject: [Nfd-dev] Interest lifetime limit In-Reply-To: Message-ID: On 3/23/14, 12:50 PM, "Lixia Zhang" wrote: >Below is a quick summary of the NFD chat this morning (Beichuan, Junxiao, >Steve, Alex, me), hopefully it addresses Christos comments below. > >1/ What's implemented in release-1 (the email exchanges showed some >misunderstandings about this) >- Each Interest carries a lifetime >- One can control the upper bound of this lifetime > through configuration. >- This magic number Junxiao threw out, 2^15 msec, is meant to be > the default max Interest lifetime, if no lifetime bound is > configured. > >In short, release-1 does allow you to set any arbitrarily long Interest >lifetime. [jb] This all sounds good. My point was just that 2^15 msec seems arbitrary, and if we're going to pick something arbitrarily at this stage, I'd suggest it be longer. :) > >2/ Interest lifetime and soft-state >Generally speaking, the desire of using arbitrarily long Interest >lifetime does not seem to match the soft-state spirit of NDN. > >If some application really wants "long-lived Interest", one can probably >support that desire through some library function which can periodically >re-send the long-lived Interest to refresh the network PIT state. [jb] Yes, I agree with that. But what we haven't established is what is *long*? (i.e., what is a reasonable refresh rate) I am not exactly sure how to do that yet. > >3/ We need to understand the consequence of purging Interests: PIT state >(breadcrumb trace along the path between consumer and where the data is) >must be consistent to get the Data back to the consumer. >If any individual node purges its PIT table entries, it destroys the >breadcrumb trace and Data can't make back >(Alex told the story that some ndnSIM players had done this PIT entry >purge game, and cried "how come I got so many Interests timeouts?" ;-) [jb] This seems related to what is a reasonable interest lifetime. Could NACKs be sent back at the time of a purge? 
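On the refresh-rate question in point 2 above: one consumer-side reading is a small helper that keeps re-expressing the Interest at some fraction of its lifetime, so the PIT soft state never lapses while the application still cares. A minimal sketch with stand-in types (the real ndn-cpp-dev Face and scheduler APIs differ):

    #include <chrono>
    #include <functional>

    // Stand-ins so the sketch is self-contained; not the real library types.
    struct Interest
    {
      std::chrono::milliseconds lifetime;
    };

    struct Face
    {
      void expressInterest(const Interest&, const std::function<void()>& /*onData*/) { }
    };

    struct Scheduler
    {
      void schedule(std::chrono::milliseconds, const std::function<void()>&) { }
    };

    // Emulate a "long-lived Interest" with soft state: re-express before the
    // previous lifetime expires, here at half the lifetime.
    void
    keepAlive(Face& face, Scheduler& scheduler, Interest interest,
              std::function<void()> onData)
    {
      face.expressInterest(interest, onData);
      scheduler.schedule(interest.lifetime / 2,
                         [&face, &scheduler, interest, onData] {
                           keepAlive(face, scheduler, interest, onData);
                         });
    }

The half-lifetime refresh period is an arbitrary illustrative choice; any period comfortably shorter than both the effective lifetime bound and the application's maximum tolerable response delay would do.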
> >4/ The idea mentioned in 2/ (letting node refresh the Interests it wants >to stay in PIT) also matches the scheme that Van talked a while back >regarding the PIT size control: an upstream router should set an upper >bound on the Interest lifetime received from a downstream router, saying >"I'll keep all Interest you send to me no more than N seconds". So it is >up to the downstream router to retransmit those Interests that did not >get satisfied in N seconds. [jb] This suggests that in the future there may be a divergence between the lifetime that application wants and what the libraries underneath do to make sure the interest persists. > >(One might think "what's the gain for purposely making downstream send >the same Interests multiple times" -- that's my question before. Then I >realized that this "retransmitting" gives the downstream router an >opportunity to re-prioritize the Interests that it really wants to get) [jb] There seems a fine line between "expressing priority" reissuing interests and interesting flooding, but this makes sense me. Another perspective from timing critical applications is for the app to issue interests at the maximum delay that it can tolerate receiving a response. (Well, or half that, we assume the data returns at roughly the same rate.) > >this may be the direction to consider for PIT size management instead of >purging PIT entries, for wired Internet. I will work with our vehicle >group to figure out whats best for vehicle situation (there is no >persistent trace/path/neighbor in that case). > >Lixia > > >On Mar 23, 2014, at 6:16 AM, Christos Papadopoulos > wrote: > >> Let me see if I can summarize (for my sake) and then ask a question: >> >> - Alex wants a defense mechanism in the PIT to guard against resource >>exhaustion from too many pending Interests (great to have such a >>mechanism, BTW). >> >> - Alex suggests a reasonably short Interest lifetime in the PIT. >> >> - Note, however, that this does not guard against deliberate attacks >>(or even nasty bugs such a congestion control scheme gone awry) - one >>can always send a flood of interests to overflow a PIT with any >>reasonable Interest lifetime. >> >> - As I see it, options are (a) set a short default as a first line >>defense and leave it at that for now, or (b) set a longer default but >>have a more sophisticated mechanism to purge PIT state when we hit the >>limit. >> >> - Along with Jeff and Giovanni, I do like option (b) with an operator >>configurable default. I also like Jeff's suggestion to pick a point >>value to mean "as long as you can". Local policies always prevail, of >>course, and these interests might be the first to be purged in case of >>overload. >> >> - I would like the default be the common case, which I think means the >>shorter value. >> >> - So my question to Alex is, do you think it is feasible to have some >>resource exhaustion defense mechanism in this version of NFD beyond a >>short Interest lifetime, and being mindful of the implementation effort >>at this stage, what would you suggest? >> >> Christos. >> >> >> On 03/22/2014 07:52 PM, Alex Afanasyev wrote: >>> This case seem to be slightly different that a general case with >>>Interests. I suspect, the intention is not send out Interests, but >>>just keep PIT state locally alive, as long as the application alive. >>>(Am I missing something here and this state is beyond just local >>>app-local daemon?) For this case, I suspect, we can do something >>>special. 
>>> >>> -- >>> Alex >>> >>> On Mar 22, 2014, at 6:48 PM, Giovanni Pau wrote: >>> >>>> >>>> In principle i agree for a default value of any time 32 sec are a bit >>>>short but if there is consensus is fine. In the vehicular case i would >>>>like to be able to issue an interest for something like >>>>/hazards/my-location/ahead of me/ and let it alive for at least >>>>several minutes, radio resources are scarce and renewing this every 30 >>>>sec appears to me a bit too much as you always want to know if there >>>>are hazards, or accidents.. >>>> >>>> That said, i agree for a reasonable default, >>>> >>>> >>>> /g. >>>> ========================== >>>> It had long since come to my attention that people of accomplishment >>>>rarely sat back and let things happen to them. They went out and >>>>happened to things. >>>> >>>> - Leonardo da Vinci >>>> ========================== >>>> >>>> >>>> >>>> >>>> On Mar 22, 2014, at 6:33 PM, Alex Afanasyev >>>> wrote: >>>> >>>>> I agree, but want to bring out the problem of soft state again. Why >>>>>would routers want to keep PIT state for a prolonged period of time >>>>>if it has no idea if the downstream still has interest in the data. >>>>> >>>>> As for (1). Data type used for lifetime allows expressing time >>>>>durations from 1 milliseconds to billion years and more (it is 2^64 >>>>>milliseconds), so it is really not a "practical" bound. >>>>> >>>>> --- >>>>> Alex >>>>> >>>>> On Mar 22, 2014, at 6:25 PM, Giovanni Pau wrote: >>>>> >>>>>> Hi All, >>>>>> >>>>>> I agree in full with Jeff arguments below that actually summarize >>>>>>my points even better than what i did myself. At this moment we know >>>>>>to little and bounding ourselves is not a good idea. Also in some >>>>>>environment may be smart to have relatively long time interest and >>>>>>have a smart Pit cleanup strategy. >>>>>> >>>>>> thanks >>>>>> Giovanni. >>>>>> >>>>>> >>>>>> >>>>>> On Mar 22, 2014, at 10:47 AM, Burke, Jeff >>>>>>wrote: >>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I agree with Alex - I am not suggesting either infinite lifetimes >>>>>>>or hard state. >>>>>>> >>>>>>> What I am suggesting is: >>>>>>> >>>>>>> 1) We don't know enough to set a practical bound on interest >>>>>>>lifetime, so let's leave the bound set to limit of the data type >>>>>>>used for storing it (or one less than that), until we have some >>>>>>>operational experience with what a reasonable value would be. >>>>>>>If it is to be operator-configurable anyway, this could be set in a >>>>>>>configuration file that is easy to modify in future distributions >>>>>>>or for particular installations. >>>>>>> >>>>>>> 2) Reserve the maximum int/long value to correspond to "as long as >>>>>>>the forwarder is willing to hang on to the interest" - this is not >>>>>>>infinite lifetime but does leave the possibility for long-lived >>>>>>>interest support in certain deployments, or with >>>>>>>deployment-specific PIT cleanup strategies. >>>>>>> >>>>>>> I realize there is a practical concern but this seems related to >>>>>>>the current state of the implementation - If the worry with these >>>>>>>is related to not yet having a PIT cleanup mechanism, I would >>>>>>>suggest that this is a basic feature that should be incorporated >>>>>>>into the initial NFD release. (Same with content store and FIB >>>>>>>limits / cleanup.) Even at this stage, busted or intentionally >>>>>>>aggressive apps should not be able to crash the forwarder by >>>>>>>filling these tables - this could happen even with relatively low >>>>>>>maximum lifetimes. 
Cleanup info also needs to be logged and >>>>>>>someday available through the instrumentation mechanisms; this has >>>>>>>already come up in Peter's debugging of ccnx?caused delays in >>>>>>>ndnrtc packet flows. >>>>>>> >>>>>>> Thanks, >>>>>>> Jeff >>>>>>> >>>>>>> >>>>>>> From: Alex Afanasyev >>>>>>> Date: Fri, 21 Mar 2014 09:57:56 -0700 >>>>>>> To: Jeff Burke >>>>>>> Cc: Junxiao Shi , Giovanni Pau >>>>>>>, "nfd-dev at lists.cs.ucla.edu" >>>>>>> >>>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>>> >>>>>>> Hi all, >>>>>>> >>>>>>> In my opinion, not having the limit and considering that one can >>>>>>>set infinite interest lifetime is not entirely correct. Isn't the >>>>>>>whole point of the Interest/Data exchange is to provide flow >>>>>>>balance? And when you separate Interest from Data by a significant >>>>>>>portion of time, I don't see how it works out to the flow balance. >>>>>>> >>>>>>> Interests are generally soft state. The client issues Interest >>>>>>>and expects data. Within reasonable time interval, routers expect >>>>>>>that the client is still down there and the network topology didn't >>>>>>>change, so the response would reach. When we are getting to >>>>>>>"unlimited" lifetimes, we are going towards "hard" state on >>>>>>>routers, without any guarantee that the client is still alive or >>>>>>>that the network hasn't changed. >>>>>>> >>>>>>> As I remember, Van was always saying that Interest should be a >>>>>>>bilateral agreement between two neighbors. If downstream is still >>>>>>>interested, it should re-express its interests. This by definition >>>>>>>assumes finite lifetimes (either global maximum or neighbors >>>>>>>explicitly communicate their maximum). >>>>>>> >>>>>>> In any case. The main reason I asked this question is that I have >>>>>>>a desire to provide a basic protection against abuse of the NDN >>>>>>>routers, at least in the testbed environment. If we don't do it in >>>>>>>a reasonable way, anybody can send a bunch of interests with huge >>>>>>>lifetimes and then just go home, while network will suffer until it >>>>>>>is rebooted (or we have a reasonable mechanism PIT cleanup >>>>>>>implemented). >>>>>>> >>>>>>> --- >>>>>>> Alex >>>>>>> >>>>>>> >>>>>>> On Mar 21, 2014, at 7:29 AM, Burke, Jeff >>>>>>>wrote: >>>>>>> >>>>>>>> >>>>>>>> If it is going to be operator configurable, perhaps we can leave >>>>>>>>the practical limit set to the theoretical limit for research >>>>>>>>versions of NFD? I don't think we have enough experience with the >>>>>>>>tradeoffs you describe to pick an upper bound at this time. >>>>>>>> >>>>>>>> In my understanding, there is no real cost to the forwarder for >>>>>>>>long-lifetime interests, because it will need a mechanism to drop >>>>>>>>old pending interests when the PIT is full anyway. Unless >>>>>>>>forwarder PIT behavior is controlled in some special way by >>>>>>>>specific operators on behalf of local applications (as it might be >>>>>>>>in Giovanni's vehicular apps), the burden is always on the >>>>>>>>application to refresh the interest with an update period that is >>>>>>>>no longer than the maximum tolerable delay for a response to the >>>>>>>>Interest, because there are no guarantees on what stays in the PIT. >>>>>>>> >>>>>>>> Further, are we sure that long-lived interests might not be >>>>>>>>common in some applications? 
For example, if an application >>>>>>>>checks for automatic 'software updates' by issuing an interest >>>>>>>>every hour, what does that application have to lose by setting the >>>>>>>>Interest lifetime to one hour, even if it is not guaranteed to >>>>>>>>persist? >>>>>>>> >>>>>>>> Jeff >>>>>>>> >>>>>>>> From: Junxiao Shi >>>>>>>> Date: Thu, 20 Mar 2014 22:17:37 -0700 >>>>>>>> To: Giovanni Pau >>>>>>>> Cc: Jeff Burke , >>>>>>>>, Lixia Zhang >>>>>>>> Subject: Re: [Nfd-dev] Interest lifetime limit >>>>>>>> >>>>>>>> 20140114 meeting discussed InterestLifetime upper bound. Van's >>>>>>>>idea is: >>>>>>>> * Protocol should not set an upper bound on InterestLifetime. >>>>>>>> * Setting a *practical* upper bound is a policy issue, >>>>>>>>configurable by operator. >>>>>>>> My proposal of using 32768ms as the default upper bound is >>>>>>>>completely unrelated to "2 octets". Its reason is: >>>>>>>> * Most applications don't need any special lifetime. >>>>>>>> * NFD Notification mechanism is more efficient if longer lifetime >>>>>>>>can be used. Push applications also desire a long lifetime. >>>>>>>> * Forwarder cannot afford long lifetime because PIT entries >>>>>>>>consume memory. >>>>>>>> * Trade-off between the need of push application and the >>>>>>>>forwarder state cost leads to 32768ms default upper bound. >>>>>>>> Yours, Junxiao >>>>>>>> On Mar 20, 2014 9:52 PM, "Giovanni Pau" wrote: >>>>>>>>> >>>>>>>>> Junxiao, >>>>>>>>> >>>>>>>>> sorry i can?t get it, if we have 64bits, then why we need to >>>>>>>>>bound it to a 16bit value? I agree is better to measure in ms >>>>>>>>>rather than sec, but yet i do not understand the need to bound. I >>>>>>>>>agree with jeff on the long timed interests, in our case such as >>>>>>>>>an interest for a road-hazard in the direction of traveling. >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> g. >>>>>>>> _______________________________________________ >>>>>>>> Nfd-dev mailing list >>>>>>>> Nfd-dev at lists.cs.ucla.edu >>>>>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>>>> >>>>>> >>>>> >>>> >>> >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > >_______________________________________________ >Nfd-dev mailing list >Nfd-dev at lists.cs.ucla.edu >http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev From jburke at remap.ucla.edu Tue Mar 25 14:52:59 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Tue, 25 Mar 2014 21:52:59 +0000 Subject: [Nfd-dev] Name component format In-Reply-To: <2B420430-013C-409D-BDA7-A240A7550D60@cs.ucla.edu> Message-ID: Hi, Comments below. Thanks, Jeff From: Lixia Zhang > Date: Sat, 22 Mar 2014 13:13:49 -0700 To: Jeff Burke > Cc: "nfd-dev at lists.cs.ucla.edu" > Subject: Re: Name component format On Mar 22, 2014, at 10:58 AM, "Burke, Jeff" > wrote: Hi, There are a few changes to the representation of names in the TLV spec (http://named-data.net/doc/NDN-TLV/0.2/name.html) that I am not sure have been widely discussed. In particular, the introduction of types (beyond distinguishing the implicit digest), an updated URI representation, and the inability to specify empty name components. Are these considered "baked"? Would it be possible to discuss these at some point in more detail? 
Hi Jeff, the changes were made after some discussions among the NFD team, then with Van. Not sure what you meant by "considered baked". . . - I do not think the changes made to the NFD release-1. - we are doing explorative research, right? [jb] Sorry, I just meant whether they were locked and incorporated into NFD release 1. Of course all naming issues can benefit from more discussions. - wonder if you would like to propose a specific time frame (i.e. next week, or longer term)? - it would be helpful if there are some inputs/reading/considerations over email before the call, so that people can think through first. [jb] I don't know that it is urgent ? I know that the NFD people have a lot going on. :) Perhaps the next meeting with IRL we can talk about it first, or on the NFD call on 4/4. The main question I have is about the introduction of the number type? What motivates it? Doesn't this start a slippery slope away from opaque names. Among other things, typing components unless required by the protocol (as seems to be the case with the implicit hash) seems to run counter to the notion of name opaqueness, and there are some conflicts in the URI representation that need to be resolved. For any URI issues: Please let Alex and Junxiao know. [jb] JeffT had mentioned some concerns with the conflict with the allowable hex encoding... I'll ask him to talk with Alex and Junxiao. for component typing: are you saying that we should allow name component typing? [jb] No, I don't think so. There might be some value to applications, but I found the notion of name opaqueness to be very powerful so am wondering about the motivation. (I understand it for the implicit hash.) In any case, as soon as we can collect a list of technical questions, I can try scheduling a discussion. Lixia -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Tue Mar 25 14:53:50 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Tue, 25 Mar 2014 21:53:50 +0000 Subject: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS In-Reply-To: <60339CCA-9D28-463F-B2B6-3881C59182F4@cisco.com> Message-ID: Please see below. (Quick related question about the repo: is the current repo that Shuo is working on integrated into this two week NFD test? Would it be possible to hear some observations on how it is performing relative to ndnrtc data rates discussed in a previous email?) From: "Dave Oran (oran)" > Date: Sun, 23 Mar 2014 20:05:04 +0000 To: Junxiao Shi > Cc: "ndn-app at lists.cs.ucla.edu" >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] [Ndn-app] NDN-RTC poke Data to CS While pushing data out into a cache isn?t necessarily dangerous if the cache is *truly* local, I am frankly quite nervous about this as a precedent. Why? Because ?local? is an incredibly slippery concept which ought to be completely specified before going down this path. At a minimum, I?d suggest that ?local? not mean ?the face is on the same box as the app?, but that either: a) the cache and the app are in exactly the same security container or b) the cache and app are mutually authenticated by means outside of NDN, and that the cache is protected by authorization machinery against pollution or poisoning by an application. [jb] Yes, we've already started to see that this notion of "local" can get a little messy in the context of the browser support we've been working with, which uses a remote websockets proxy to talk to an ndnd. The proxy is "local" to the daemon but the app is not really... 
this is probably an artifact of the current design that would go away, but it came up pretty quickly so it will probably do so again. (For example, it's unclear whether each browser tab's security container will include the local host or just the remote.) I?ll also note that if the goal here is to protect against an app that produces data and then the box it?s running on crashes or gets partitioned form the network, the approach of local faces won?t do the trick. It may however permit data to survive an application crash or exit. Clearly the alternative of having a fast and robust repo is superior to opening up the can of worms above. As a general comment, I?m detecting a bit more expediency here than I?m comfortable with. [jb] Yes; a fast repo seems the best solution for our use cases so far. However, if we really need to seed caches, let me suggest we step back and try to design an alternative approach. If the goal is simply to ensure data is available in advance of interests arriving to either reduce delay or provide robustness against app crash or box crash or network partition (I suspect the vehicular guys might be the ones interested in partition) there are alternatives that might work a whole lot better. One that occurs to me on just a few minutes thought is to issue an interest for the data at the same time you publish the data, and have machinery to route the interest explicitly to the cache you want to fill. I?ll point out that explicit interest routing might have other uses as well. [jb] Junxiao has pointed out this approach as well and it should also work in the short term. We wish for some guarantees about persistence of the data in the content store, though, that don't come along with it unless there are other hooks provided. Just thinking out loud here. On Mar 22, 2014, at 9:40 PM, Junxiao Shi > wrote: Dear folks The design is: 1. Incoming Data pipeline decides whether a Data is unsolicited or not. 2. A function decides whether an unsolicited Data CAN be admitted. * Currently, this function returns true if incoming face is local. * Replacing this function allows for alternate behaviors. * This function SHOULD NOT be part of CS policy (which governs the replacement of cached Data). It is a policy in forwarding, not in CS. 3. If allowed by the function in step 2, the Data is given to CS. CS policy decides whether to admit the Data based on available space, and also decides which Data to evict in case CS is full. Yours, Junxiao _______________________________________________ Ndn-app mailing list Ndn-app at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/ndn-app _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Thu Mar 27 13:51:54 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Thu, 27 Mar 2014 13:51:54 -0700 Subject: [Nfd-dev] trailing whitespace Message-ID: <5BD96345-5FDF-44A3-8D63-CB3EF41C5DA0@ucla.edu> Hi guys, It isn't a strict requirement, but as a suggestion, the trailing whitespace is always looks odd in source files (and is marked red on gerrit). I do not want to introduce any intrusive options as it could be dangerious, but there is a simple way git itself can warn you/force to remove it before creating commits. 
Just copy the default pre-commit hook .git/hooks/pre-commit.sample as .git/hooks/pre-commit and it should work:

  cp .git/hooks/pre-commit.sample .git/hooks/pre-commit

The behavior can be additionally adjusted, but the default configuration is good enough to prevent creating commits with trailing whitespace.

---
Alex

From shijunxiao at email.arizona.edu  Thu Mar 27 13:57:36 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 27 Mar 2014 13:57:36 -0700
Subject: [Nfd-dev] trailing whitespace
In-Reply-To: <5BD96345-5FDF-44A3-8D63-CB3EF41C5DA0@ucla.edu>
References: <5BD96345-5FDF-44A3-8D63-CB3EF41C5DA0@ucla.edu>
Message-ID: 

Hi Alex

This rule cannot apply to every file: trailing whitespace is significant in Markdown.

http://daringfireball.net/projects/markdown/syntax#p
When you do want to insert a <br />
break tag using Markdown, you end a line with two or more spaces, then type return.

Yours, Junxiao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From davidepesa at gmail.com  Thu Mar 27 14:10:01 2014
From: davidepesa at gmail.com (Davide Pesavento)
Date: Thu, 27 Mar 2014 22:10:01 +0100
Subject: [Nfd-dev] trailing whitespace
In-Reply-To: <5BD96345-5FDF-44A3-8D63-CB3EF41C5DA0@ucla.edu>
References: <5BD96345-5FDF-44A3-8D63-CB3EF41C5DA0@ucla.edu>
Message-ID: 

On Thu, Mar 27, 2014 at 9:51 PM, Alex Afanasyev wrote:
> Hi guys,
>
> It isn't a strict requirement, but as a suggestion: trailing whitespace always looks odd in source files (and is marked red on gerrit). I do not want to introduce any intrusive options, as that could be dangerous, but there is a simple way git itself can warn you / force you to remove it before creating commits.

+1

In fact, I'd go one step further and add a code style rule that forbids trailing whitespace in .cpp/.hpp source files. Most IDEs and source code editors can automatically remove trailing whitespace upon save.

Thanks,
Davide

From alexander.afanasyev at ucla.edu  Thu Mar 27 14:27:21 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Thu, 27 Mar 2014 14:27:21 -0700
Subject: [Nfd-dev] trailing whitespace
In-Reply-To: 
References: <5BD96345-5FDF-44A3-8D63-CB3EF41C5DA0@ucla.edu>
Message-ID: 

Hi Junxiao,

In this case, you would want to disable the hook, for example using the --no-verify commit option:

  git commit --no-verify   (or git commit -n)

---
Alex

On Mar 27, 2014, at 1:57 PM, Junxiao Shi wrote:

> Hi Alex
>
> This rule cannot apply to every file: trailing whitespace is significant in Markdown.
>
> http://daringfireball.net/projects/markdown/syntax#p
> When you do want to insert a <br />
> break tag using Markdown, you end a line with two or more spaces, then type return.
>
> Yours, Junxiao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu  Sun Mar 30 11:58:46 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sun, 30 Mar 2014 11:58:46 -0700
Subject: [Nfd-dev] Understanding nfd's trust model
In-Reply-To: 
References: 
Message-ID: 

Hi Tai-Lin

The NFD trust model is *limited to management*, and does not apply to packets being forwarded.

The NFD Management trust model is very simple: any and all keys to be trusted must be statically configured in the configuration file.

NFD Management is unable to support any "trust chain", because:

- A trust chain requires retrieving public keys over the network. To retrieve keys over the network, a correct FIB is needed.
- ControlCommands are used to set up the FIB.
- To validate the ControlCommands, NFD needs the keys in the trust chain.

This is a circular dependency. To break this circle, we decided to require statically configured keys.

It's not a big limitation for NFD Management to require statically configured keys, because NFD Management can be used from localhost only, and only a very limited set of entities interact with NFD Management via ControlCommand:

- configuration tools: nfdc
- control plane: NRD

Although ndn-cpp-dev has an ndn::nfd::Controller class that allows apps to interact with NFD Management via ControlCommand, this class is intended to be used by nfdc only. Regular apps register prefixes with NRD. NRD can and should support a more flexible trust model, including but not limited to a trust chain.

Yours, Junxiao

On Sun, Mar 30, 2014 at 11:44 AM, Tai-Lin Chu wrote:
> hi,
> I discovered [this doc](http://irl.cs.ucla.edu/~yingdi/web/pub/Trust-Management-Library-v4.pdf)
>
> questions:
> 1. does nfd treat the default.ndncert as the root key?
>
> 2. If so, does it mean that the keylocator's name of this cert is not used?
>
> 3. If I want to express the signing chain, do I simply add the required key to that cert dir and nfd will verify the chain?
>
> e.g.
> simple: default.cert --(sign)-> data
> chain: default.cert --(sign)-> other.cert --(sign)-> data
>
> Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu  Sun Mar 30 14:34:20 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sun, 30 Mar 2014 14:34:20 -0700
Subject: [Nfd-dev] Understanding nfd's trust model
In-Reply-To: 
References: 
Message-ID: 

Hi Tai-Lin

NFD Management ControlCommand uses signed Command Interests. NFD Management doesn't accept any Data, so Data signature verification is irrelevant.

Yours, Junxiao

On Mar 30, 2014 2:28 PM, "Tai-Lin Chu" wrote:
>
> But what should be in the data packet's keylocator then?
> What I found in a certificate (a data packet) are:
> 1. data name (dsk)
> 2. certificate subject name (ksk)
> 3. rsa publickey bits (of dsk? or ksk?)
> (some ignored for this discussion)
>
> How does nfd check the data packet's signature?
>
> Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu  Sun Mar 30 19:18:11 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sun, 30 Mar 2014 19:18:11 -0700
Subject: [Nfd-dev] NFD CodeStyle rule 76
Message-ID: 

Hi Alex

Amended rule 76 gives an example of a switch statement:

  switch (condition) {
    case ABC:
      statements;
      // Fallthrough
    case DEF:
      statements;
      break;
    case XYZ:
      statements;
      break;
    default:
      statements;
      break;
  }

Rule 76 also states: "Note that each case keyword is indented relative to the switch statement as a whole." This note is not eliminated by the amendment.

Amended rule 68 permits an additional block layout:

  while (!done)
    {
      doSomething();
      done = moreToDo();
    }

There is a conflict when the two amendments are applied together. The note in rule 76 causes case keywords to indent *relative to* the switch keyword. The example given indicates that the relative offset is 2 spaces. However, when the additional block layout permitted by amended rule 68 is used, case keywords would end up in the same column as the { } brackets:

  switch (condition)
    {
    case ABC:
      statements;
      break;
    default:
      statements;
      break;
    }

Is this the intended layout, or should case keywords be indented 4 spaces after the switch keyword (2 spaces after the { } brackets)?

  switch (condition)
    {
      case ABC:
        statements;
        break;
      default:
        statements;
        break;
    }

Yours, Junxiao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 