From jdd at wustl.edu Fri Apr 1 08:38:12 2016
From: jdd at wustl.edu (Dehart, John)
Date: Fri, 1 Apr 2016 15:38:12 +0000
Subject: [Nfd-dev] Help with Interests and Nonces
Message-ID: <8BA0B055-7670-4D36-A977-E6E669109551@wustl.edu>

All:

I am trying to figure out something that is happening on the NDN Testbed and need some help.

While adding a new node, I misconfigured part of the NLSR keys configuration for the node. In tracking that down I noticed that there are some Interests that are never satisfied but also never go away. Here is an example from the output of ndndump that I am logging to a file:

ndnops at octopus-ProLiant-DL380p-Gen8:~$ grep 3880653420 ndndump.srru.log | head -1
1459523487.472518 From: 133.1.17.51, To: 210.114.89.49, Tunnel Type: UDP, INTEREST: /ndn/broadcast/KEYS/ndn/th/ac/srru/KEY/ksk-1458752543332/ID-CERT?ndn.Nonce=3880653420

ndnops at octopus-ProLiant-DL380p-Gen8:~$ grep 3880653420 ndndump.srru.log | tail -1
1459524558.534421 From: 210.114.89.49, To: 114.247.165.44, Tunnel Type: UDP, INTEREST: /ndn/broadcast/KEYS/ndn/th/ac/srru/KEY/ksk-1458752543332/ID-CERT?ndn.Nonce=3880653420

ndnops at octopus-ProLiant-DL380p-Gen8:~$ grep 3880653420 ndndump.srru.log | wc -l
1089

The way I read that is that my log file has a first occurrence of an Interest for this key at time 1459523487.472518 and a last occurrence at time 1459524558.534421. Those times are 1071 seconds (almost 18 minutes) apart, and there have been 1089 instances of that Nonce coming and going from the node I am monitoring. (And they are continuing; checking now, there are 1471 instances.)

Does that mean that these Interests are bouncing around the world and never timing out? Or, perhaps more correctly stated, are these Interests traveling around the world, and by the time they make a cycle back to a node, the PIT entry has timed out and that node re-broadcasts them?

I have shut down nfd and NLSR on the misconfigured node, removed the node as a neighbor from all NLSR configuration files, and restarted nfd and NLSR on all of its neighbors. So I don't think any node should be generating new Interests for this key.

Thanks for any insight on this.

John
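The first/last/count checks above can also be done in a single pass over the dump; a minimal Python sketch, assuming the ndndump.srru.log file name and Nonce value from John's commands:

    nonce = "3880653420"
    times = [float(line.split()[0])
             for line in open("ndndump.srru.log")
             if "ndn.Nonce=" + nonce in line]
    # ndndump prints the capture timestamp as the first field of each line
    print("%d occurrences spanning %.0f seconds" % (len(times), times[-1] - times[0]))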
From shijunxiao at email.arizona.edu Fri Apr 1 13:03:17 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Fri, 1 Apr 2016 13:03:17 -0700
Subject: [Nfd-dev] Help with Interests and Nonces
In-Reply-To: <8BA0B055-7670-4D36-A977-E6E669109551@wustl.edu>
References: <8BA0B055-7670-4D36-A977-E6E669109551@wustl.edu>
Message-ID: <04887D92-C341-4DD6-8FD2-359C1C3E7A49@email.arizona.edu>

Hi John

I believe it's still #1953, but at a large timescale.

Per ndnmap, the SRRU node has 5 peers. With the multicast strategy used on the /ndn/broadcast namespace, an incoming Interest from one of these peers will be forwarded to the 4 other peers. You are observing 1089 incoming and outgoing Interests. Divide that by 5 (1 incoming + 4 outgoing), and it's roughly 218 incoming Interests. They are spread over a duration of 1071 seconds. Divide that by 218 incoming Interests, and we can see the interval between two incoming Interests is 4.9 seconds on average.

The solution for #1953 is the Dead Nonce List, which is designed to prevent these persistent loops. A persistent loop can be detected if the Nonce is still in the DNL when the Interest loops back; it cannot be detected if the Interest loops back after the Nonce lifetime. The expected lifetime is 6 seconds, but it's probabilistically adjusted, so the actual lifetime can fall below 4.9 seconds, and therefore a persistent loop with a 4.9-second interval cannot always be detected.

If this ever happens again, you can try to find the entire loop by following the source IP field in the ndndump trace and then running ndndump at that peer. After the entire loop is found, you can raise the NFD log level to DEBUG for the Forwarder component (send SIGHUP to the nfd process; don't restart NFD), and the NFD logs can give more information on what's happening.

Yours, Junxiao
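The arithmetic in that diagnosis, spelled out as a quick Python sketch (the counts and timestamps come from John's dump above; 6 seconds is the expected Dead Nonce List entry lifetime Junxiao cites):

    observed = 1089                     # incoming + outgoing occurrences in the dump
    incoming = observed / 5.0           # 1 incoming + 4 outgoing per pass -> ~218
    duration = 1459524558.534421 - 1459523487.472518   # ~1071 seconds
    interval = duration / incoming      # seconds between consecutive loop passes
    print("%.1f s between loop passes" % interval)     # ~4.9 s
    # The loop is caught only while the Nonce is still in the DNL; since the
    # probabilistically adjusted lifetime can dip below 4.9 s, detection can
    # fail even though the expected lifetime is 6 s.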
From Ignacio.Solis at parc.com Fri Apr 1 16:21:13 2016
From: Ignacio.Solis at parc.com (Ignacio.Solis at parc.com)
Date: Fri, 1 Apr 2016 23:21:13 +0000
Subject: [Nfd-dev] Help with Interests and Nonces
In-Reply-To: <04887D92-C341-4DD6-8FD2-359C1C3E7A49@email.arizona.edu>
References: <8BA0B055-7670-4D36-A977-E6E669109551@wustl.edu>, <04887D92-C341-4DD6-8FD2-359C1C3E7A49@email.arizona.edu>
Message-ID: <2f7ad305-426e-46e1-b220-6e88cbb959a4@parc.com>

At some point you're going to reconsider having some form of loop halting in the packets. Maybe a hop limit like CCN? Hopefully this will be sooner rather than later.

Nacho (Ignacio) Solis
Principal Scientist
Protocol Architect
Palo Alto Research Center

From: Junxiao Shi
Sent: Apr 1, 2016 3:03 PM
To: Dehart, John
Subject: Re: [Nfd-dev] Help with Interests and Nonces
[...]

From Navdeep.Uniyal at neclab.eu Mon Apr 4 01:57:53 2016
From: Navdeep.Uniyal at neclab.eu (Navdeep Uniyal)
Date: Mon, 4 Apr 2016 08:57:53 +0000
Subject: [Nfd-dev] [Ndn-interest] Data not received from NDN traffic generator
References: <15421E67B274CD4AB5F6AEA46A684C3708AB280C@PALLENE.office.hd>
Message-ID: <15421E67B274CD4AB5F6AEA46A684C3708AB2B1A@PALLENE.office.hd>

Hi Junxiao,

I used digest signing to generate Data packets with the NDN traffic generator. Also, I have increased the InterestLifetime. Still I find that not all Interests are being satisfied. I am using a CS size of zero so that all Interests can reach the end server. The onInterestLoop events happen because I am testing forwarding with 4 types of Interests using ndn-traffic-client, and sometimes the same Interest is generated back to back.

The issue remains the same: Interests stop getting satisfied after a certain time. Also, I found a few errors in the log file. I suspect the issue is due to the errors below, but I have no clue about the error type:

1459757352.193370 ERROR: [EthernetTransport] [id=264,local=dev://sa-eth0,remote=ether://[01:00:5e:00:17:aa]] pcap_next_ex failed: The interface went down
1459757352.213168 ERROR: [EthernetTransport] [id=263,local=dev://sa-eth1,remote=ether://[01:00:5e:00:17:aa]] pcap_next_ex failed: The interface went down
1459757352.229184 ERROR: [EthernetTransport] [id=262,local=dev://sa-eth2,remote=ether://[01:00:5e:00:17:aa]] pcap_next_ex failed: The interface went down

Attached is the recent log file. Please advise.

Regards,
Navdeep Uniyal

From: Junxiao Shi [mailto:shijunxiao at email.arizona.edu]
Sent: Wednesday, 30 March 2016 19:31
To: Navdeep Uniyal
Cc: ndn-interest at lists.cs.ucla.edu
Subject: Re: [Ndn-interest] Data not received from NDN traffic generator

Hi Navdeep

I'm looking at the last few lines of the log file. It appears that there's some loop going on in the network. It's particularly strange that face 262 receives an Interest with a duplicate Nonce within 0.2 ms of sending it.

InterestLifetime is set to 200 ms, but the producer spends 500 ms to generate Data. A longer InterestLifetime is needed; the consumer needs to send Interests at a slower rate; the producer needs to generate Data faster (e.g. digest signing instead of RSA). Also, you mentioned there are 2500 Interests, but I only see a small number of distinct names.
1459335670.181340 DEBUG: [Forwarder] onIncomingInterest face=258 interest=/ndn/nle/file3
1459335670.181630 DEBUG: [Forwarder] onContentStoreMiss interest=/ndn/nle/file3
1459335670.181672 DEBUG: [Forwarder] onOutgoingInterest face=262 interest=/ndn/nle/file3
1459335670.181898 DEBUG: [Forwarder] onIncomingInterest face=262 interest=/ndn/nle/file3
1459335670.182178 DEBUG: [Forwarder] onInterestLoop face=262 interest=/ndn/nle/file3
1459335670.201832 DEBUG: [Forwarder] onInterestUnsatisfied interest=/ndn/nle/file5
1459335670.201911 DEBUG: [Forwarder] onInterestFinalize interest=/ndn/nle/file5 unsatisfied
1459335670.231633 DEBUG: [Forwarder] onIncomingInterest face=259 interest=/ndn/nle/file5
1459335670.231972 DEBUG: [Forwarder] onContentStoreMiss interest=/ndn/nle/file5
1459335670.232011 DEBUG: [Forwarder] onOutgoingInterest face=263 interest=/ndn/nle/file5
1459335670.271813 DEBUG: [Forwarder] onInterestUnsatisfied interest=/ndn/nle/file1
1459335670.271889 DEBUG: [Forwarder] onInterestFinalize interest=/ndn/nle/file1 unsatisfied
1459335670.281172 DEBUG: [Forwarder] onIncomingInterest face=259 interest=/ndn/nle/file1
1459335670.281512 DEBUG: [Forwarder] onContentStoreMiss interest=/ndn/nle/file1
1459335670.281551 DEBUG: [Forwarder] onOutgoingInterest face=263 interest=/ndn/nle/file1
1459335670.381690 DEBUG: [Forwarder] onIncomingInterest face=259 interest=/ndn/nle/file7
1459335670.382148 DEBUG: [Forwarder] onContentStoreMiss interest=/ndn/nle/file7
1459335670.382193 DEBUG: [Forwarder] onOutgoingData face=259 data=/ndn/nle/file7
1459335670.382362 DEBUG: [Forwarder] onInterestUnsatisfied interest=/ndn/nle/file3
1459335670.382447 DEBUG: [Forwarder] onInterestFinalize interest=/ndn/nle/file3 unsatisfied
1459335670.432216 DEBUG: [Forwarder] onInterestUnsatisfied interest=/ndn/nle/file5
1459335670.432315 DEBUG: [Forwarder] onInterestFinalize interest=/ndn/nle/file5 unsatisfied
1459335670.481765 DEBUG: [Forwarder] onInterestUnsatisfied interest=/ndn/nle/file1
1459335670.481840 DEBUG: [Forwarder] onInterestFinalize interest=/ndn/nle/file1 unsatisfied
1459335670.482383 DEBUG: [Forwarder] onInterestFinalize interest=/ndn/nle/file7 satisfied
1459335685.781128 DEBUG: [Forwarder] onIncomingData face=263 data=/ndn/nle/file3
1459335685.781423 DEBUG: [Forwarder] onDataUnsolicited face=263 data=/ndn/nle/file3 cached
1459335685.782137 DEBUG: [Forwarder] onIncomingData face=263 data=/ndn/nle/file1
1459335685.782378 DEBUG: [Forwarder] onDataUnsolicited face=263 data=/ndn/nle/file1 cached

Yours, Junxiao

On Wed, Mar 30, 2016 at 5:42 AM, Navdeep Uniyal wrote:

Hi All,

I am facing an issue. In my setup I am using a simple topology with one client, one server, and 3 different paths, each with one intermediate host. Not all of the generated Interests are getting satisfied while using the best-route strategy. As per the logs I can see that, after satisfying a few Interests, the server (NDN traffic generator) is not responding to further requests, and only the ones in the CS are getting satisfied. I am unable to find the reason for this behavior, although the number of Interests generated is 2500 and there is no limit on the maximum number of Interests the traffic generator can satisfy (I also tried with a max limit of 20000 Interests). Attached are the NFD logs for the server generating data.

Please, if someone can help in this regard.

Regards,
Navdeep Uniyal
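One way to check the timing mismatch Junxiao points out is to pair each onOutgoingInterest with the matching onIncomingData in a Forwarder DEBUG log; a rough Python sketch (the file name nfd.log is hypothetical; the parsing assumes lines shaped like the excerpt above):

    pending = {}
    for line in open("nfd.log"):
        fields = line.split()
        if len(fields) < 4:
            continue
        ts, event = float(fields[0]), fields[3]
        name = fields[-1].split("=", 1)[-1]   # interest=... or data=...
        if event == "onOutgoingInterest":
            pending[name] = ts
        elif event == "onIncomingData" and name in pending:
            # Data arriving ~15 s after the Interest, as in the excerpt,
            # far exceeds the 200 ms InterestLifetime.
            print("%s answered after %.3f s" % (name, ts - pending.pop(name)))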
From Navdeep.Uniyal at neclab.eu Tue Apr 5 00:35:29 2016
From: Navdeep.Uniyal at neclab.eu (Navdeep Uniyal)
Date: Tue, 5 Apr 2016 07:35:29 +0000
Subject: [Nfd-dev] [Ndn-interest] Data not received from NDN traffic generator
In-Reply-To: <15421E67B274CD4AB5F6AEA46A684C3708AB2B1A@PALLENE.office.hd>
References: <15421E67B274CD4AB5F6AEA46A684C3708AB280C@PALLENE.office.hd> <15421E67B274CD4AB5F6AEA46A684C3708AB2B1A@PALLENE.office.hd>
Message-ID: <15421E67B274CD4AB5F6AEA46A684C3708AB2BCB@PALLENE.office.hd>

Hi,

Please, can someone help resolve the issue described below?

Regards,
Navdeep Uniyal

From: Ndn-interest [mailto:ndn-interest-bounces at lists.cs.ucla.edu] On Behalf Of Navdeep Uniyal
Sent: Monday, 4 April 2016 10:58
To: Junxiao Shi
Cc: nfd-dev at lists.cs.ucla.edu; ndn-interest at lists.cs.ucla.edu
Subject: Re: [Ndn-interest] Data not received from NDN traffic generator
[...]
From nfd-call-notification at mail1.yoursunny.com Tue Apr 5 07:00:02 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Tue, 5 Apr 2016 07:00:02 -0700
Subject: [Nfd-dev] NFD call 20160405
Message-ID: <201604051400.u35E02Tv001044@lectura.cs.arizona.edu>

From jdd at wustl.edu Tue Apr 5 09:07:30 2016
From: jdd at wustl.edu (Dehart, John)
Date: Tue, 5 Apr 2016 16:07:30 +0000
Subject: [Nfd-dev] NFD call 20160405
In-Reply-To: <201604051400.u35E02Tv001044@lectura.cs.arizona.edu>
References: <201604051400.u35E02Tv001044@lectura.cs.arizona.edu>
Message-ID: <85981AD4-F3A3-4BE2-BDB6-4E1EFF43CB14@wustl.edu>

On Apr 5, 2016, at 9:00 AM, NFD call notification wrote:

> Dear folks
> This is a reminder of the upcoming NFD call using Bluejeans https://bluejeans.com/760263096. The current call time is every Tuesday/Thursday 13:00-14:00 Pacific Time. The current agenda includes the following issues:
>
> http://redmine.named-data.net/issues/3566
> Adaptive Forwarding Strategy for hyperbolic routing - Handling NACKs

Is today's call really on the strategy and NACKs, or is it on hyperbolic routing in general?

Thanks,
John

From lanwang at memphis.edu Tue Apr 5 09:33:20 2016
From: lanwang at memphis.edu (Lan Wang (lanwang))
Date: Tue, 5 Apr 2016 16:33:20 +0000
Subject: [Nfd-dev] NFD call 20160405
In-Reply-To: <85981AD4-F3A3-4BE2-BDB6-4E1EFF43CB14@wustl.edu>
References: <201604051400.u35E02Tv001044@lectura.cs.arizona.edu> <85981AD4-F3A3-4BE2-BDB6-4E1EFF43CB14@wustl.edu>
Message-ID:

On Apr 5, 2016, at 11:07 AM, Dehart, John wrote:
> Is today's call really on the strategy and NACKs, or is it on hyperbolic routing in general?

It is on hyperbolic routing in general. Vince: can you update the agenda? I don't remember how to do it.

Lan

From vslehman at memphis.edu Tue Apr 5 10:44:10 2016
From: vslehman at memphis.edu (Vince Lehman (vslehman))
Date: Tue, 5 Apr 2016 17:44:10 +0000
Subject: [Nfd-dev] NFD call 20160405
In-Reply-To:
References: <201604051400.u35E02Tv001044@lectura.cs.arizona.edu> <85981AD4-F3A3-4BE2-BDB6-4E1EFF43CB14@wustl.edu>
Message-ID: <75C1BA7B-3168-4F43-BC13-F1C15E8FA677@memphis.edu>

Sure, I've updated the agenda.
--
Vince Lehman

On Apr 5, 2016, at 11:33 AM, Lan Wang (lanwang) wrote:
[...]

From nfd-call-notification at mail1.yoursunny.com Thu Apr 7 07:00:02 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Thu, 7 Apr 2016 07:00:02 -0700
Subject: [Nfd-dev] NFD call 20160407
Message-ID: <201604071400.u37E02xe021591@lectura.cs.arizona.edu>

From jdd at wustl.edu Thu Apr 7 08:46:45 2016
From: jdd at wustl.edu (Dehart, John)
Date: Thu, 7 Apr 2016 15:46:45 +0000
Subject: [Nfd-dev] Dropping of Ubuntu 12.04 support
Message-ID:

All:

Is there a schedule for exactly when support for Ubuntu 12.04 will be dropped? The reason I ask is that most of the NDN Testbed still runs 12.04. It will be a major effort to get all nodes upgraded, and it will undoubtedly cause a lot of disruption.

John

From shijunxiao at email.arizona.edu Thu Apr 7 08:51:45 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 7 Apr 2016 08:51:45 -0700
Subject: [Nfd-dev] Dropping of Ubuntu 12.04 support
Message-ID:

Hi John

Per platform policy, NDN projects are required to support the latest and previous Ubuntu LTS releases. As soon as Ubuntu 16.04 LTS is released, Ubuntu 12.04 will fall out of this range and become unsupported. Per the Ubuntu 16.04 release schedule, the exact date is Apr 21, 2016.

Nodes not yet upgraded to 14.04 or later can continue to run 0.4.1. Wire formats are compatible.

Yours, Junxiao

On Thu, Apr 7, 2016 at 8:46 AM, Dehart, John wrote:
[...]
From shijunxiao at email.arizona.edu Fri Apr 8 15:58:14 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Fri, 8 Apr 2016 15:58:14 -0700
Subject: [Nfd-dev] Fwd: Code Coverage Percentage
In-Reply-To: <57083662.5000903@email.arizona.edu>
References: <57083662.5000903@email.arizona.edu>
Message-ID:

As decided at the 20140114 NFD meeting, code coverage is supposed to be over 90%. But does anyone know the policy on "difference of code coverage"?

---------- Forwarded message ----------
From: "Eric Newberry"
Date: Apr 8, 2016 15:53
Subject: Code Coverage Percentage
To: "Junxiao Shi"

> From your experience with code coverage in NDN, what would you consider to be a significant difference in code coverage, percentage wise? 5%? 10%? I wrote a script to output files where the coverage percentage differs by more than a certain amount.
>
> Eric

From davide.pesavento at lip6.fr Sat Apr 9 08:23:41 2016
From: davide.pesavento at lip6.fr (Davide Pesavento)
Date: Sat, 9 Apr 2016 17:23:41 +0200
Subject: [Nfd-dev] Fwd: Code Coverage Percentage
In-Reply-To:
References: <57083662.5000903@email.arizona.edu>
Message-ID:

I'm not sure I understand the question (or maybe I'm missing some context)... difference from what to what? Between one commit and the next one?

Also, when you say "code coverage", what kind of coverage (classes/lines/conditionals/...) are you talking about?

On Sat, Apr 9, 2016 at 12:58 AM, Junxiao Shi wrote:
[...]

From shijunxiao at email.arizona.edu Sun Apr 10 18:21:31 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sun, 10 Apr 2016 18:21:31 -0700
Subject: [Nfd-dev] review request: hasPendingOutRecords considers Nack
Message-ID:

Dear folks

I have modified the fw::hasPendingOutRecords function to consider Nacks, and I need someone to review the code: http://gerrit.named-data.net/2812

If you are willing to have a look at the patch, I'll appreciate it. You don't have to be an expert in order to do a code review.

Yours, Junxiao
From lixia at CS.UCLA.EDU Mon Apr 11 07:52:09 2016
From: lixia at CS.UCLA.EDU (Lixia Zhang)
Date: Mon, 11 Apr 2016 07:52:09 -0700
Subject: [Nfd-dev] Measurements Table
Message-ID: <9E5441B4-98D4-4A60-94A2-725C40859B34@cs.ucla.edu>

re-directing to the right mailing list

> On Apr 11, 2016, at 7:26 AM, Pedro Figueiredo wrote:
>
> Hi everyone,
>
> Is there any way (in the current NFD implementation) to list the Measurements Table entries using something like nfd-status? Or only by accessing the get/setEntry methods in code?
>
> I also have questions about how it is used by the forwarding strategies. The NFD developer's guide states that strategies use the table to store arbitrary information (delay, jitter, RTT, etc.), but http://named-data.net/doc/NFD/current/overview.html says that strategies store past performance results. How "past" is that? How often do strategies update this table?
>
> I am guessing that what you would like to store varies between forwarding strategies, but is there any common update frequency that strategies use for entries in the table (e.g. every successful Interest-Data retrieval)?
>
> Thanks in advance,
>
> Pedro.

From enewberry at email.arizona.edu Mon Apr 11 11:51:49 2016
From: enewberry at email.arizona.edu (Eric Newberry)
Date: Mon, 11 Apr 2016 11:51:49 -0700
Subject: [Nfd-dev] Fwd: Code Coverage Percentage
References: <57083662.5000903@email.arizona.edu>
Message-ID: <570BF245.8000606@email.arizona.edu>

The coverage tester we use tests lines and conditionals.

I'm trying to move code coverage from running on Ubuntu 12.04 to 14.04, as 12.04 will not be a supported platform for NDN when 16.04 is released later this month. I've run coverage for identical commits on both platforms for ndn-atmos, ndn-cxx, ndns, and NFD. While some files differ significantly in coverage between the two platforms, the difference in the overall coverage percentages is insignificant (+/- 1%), and coverage is sometimes even greater on 14.04. However, some files are missing from the coverage report on 14.04. The code coverage reports are available on Redmine (#3386).

Eric
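For comparison scripts like the one Eric mentions, the core check is just a per-file delta; a small Python sketch (not Eric's actual script; the dict-based report format and the file path are assumptions for illustration):

    def coverage_diff(report_a, report_b, threshold=5.0):
        # report_a / report_b map source file -> percent covered
        for path in sorted(set(report_a) & set(report_b)):
            delta = report_b[path] - report_a[path]
            if abs(delta) > threshold:
                print("%s: %+.1f%% (%.1f%% -> %.1f%%)"
                      % (path, delta, report_a[path], report_b[path]))

    coverage_diff({"daemon/fw/forwarder.cpp": 92.1},   # e.g. 12.04 report
                  {"daemon/fw/forwarder.cpp": 84.0})   # e.g. 14.04 report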
From Navdeep.Uniyal at neclab.eu Tue Apr 12 03:28:59 2016
From: Navdeep.Uniyal at neclab.eu (Navdeep Uniyal)
Date: Tue, 12 Apr 2016 10:28:59 +0000
Subject: [Nfd-dev] [Mini-NDN] Interfaces went down in minindn
References: <15421E67B274CD4AB5F6AEA46A684C3708AB2D2C@PALLENE.office.hd> <15421E67B274CD4AB5F6AEA46A684C3708AB2DAE@PALLENE.office.hd> <15421E67B274CD4AB5F6AEA46A684C3708AB2E4F@PALLENE.office.hd> <15421E67B274CD4AB5F6AEA46A684C3708AB3037@PALLENE.office.hd>
Message-ID: <15421E67B274CD4AB5F6AEA46A684C3708AB30DB@PALLENE.office.hd>

Hi Ashlesh,

Thank you for your help. I changed the experiment as you suggested and tested it again.

1. I found Interests moving from the host to the server, but no Data coming in.
2. Yes, the error occurs after quitting Mini-NDN, as you said; only after quitting Mini-NDN does the requested Data come in, and it is then marked as unsolicited.

I could not find out why data generation suddenly stops, as the ndndump and NFD logs give no proper clue. I am attaching my configuration files, logs, and dump files. I am also adding the nfd-dev list to this mail in case someone there can help.

Steps to redo the experiment:
1. Install the experiment nletest.py on Mini-NDN using "install.sh -i"
2. Copy the topology file nectopo.conf to /{path to minindn}/ndn_utils/topologies
3. Copy the traffic generator server and client files "ndn-traffic-client.conf" and "ndn-traffic-server.conf" to /{path to ndn traffic generator}/ndn-traffic-generator/build.
4. Run sudo minindn --experiment=icnle nectopo.conf
5. On the Mininet CLI: ga ndn-traffic -c 1000 -i 100 /home/mininet/mini-ndn/ndn-traffic-generator/build/ndn-traffic-client.conf

Best Regards,
Navdeep Uniyal

From: Ashlesh Gawande (agawande) [mailto:agawande at memphis.edu]
Sent: Monday, 11 April 2016 22:08
To: Navdeep Uniyal; mini-ndn at lists.cs.ucla.edu
Subject: Re: [Mini-NDN] Interfaces went down in minindn

Some observations:

1) I think the "interface went down" error appears when you quit Mini-NDN and NFD is killed.
2) When I tried an interval of 1 ms (count: 1000), there was no loss for the first 4 tries; on the 5th try I see the same problem.
3) I turned on ndndump and found that sa suddenly stops responding with Data - not sure why. For some time, it seems, Interests do reach sa without it responding.
Can you modify your experiment as below and trace Interest/Data to see if you can find where it fails:

def setup(self):
    for host in self.net.hosts:
        for intf in host.intfNames():
            ndnDumpOutputFile = "dump.%s_%s" % (intf, str(host.intf(intf).IP()))
            host.cmd("sudo ndndump -f '.*nle.*' -i %s > %s &" % (intf, ndnDumpOutputFile))
        if host.name == 'ca':

Ashlesh

From: Navdeep Uniyal
Sent: Monday, April 11, 2016 7:37:37 AM
To: Ashlesh Gawande (agawande)
Subject: RE: [Mini-NDN] Interfaces went down in minindn

Hi Ashlesh,

Please do let me know if there are any updates on the issue.

Best Regards,
Navdeep Uniyal

From: Navdeep Uniyal
Sent: Friday, 8 April 2016 09:48
To: 'Ashlesh Gawande (agawande)'
Cc: mini-ndn at lists.cs.ucla.edu
Subject: RE: [Mini-NDN] Interfaces went down in minindn

Thank you, Ashlesh. The results are similar for me.

Best Regards,
Navdeep Uniyal

From: Ashlesh Gawande (agawande) [mailto:agawande at memphis.edu]
Sent: Friday, 8 April 2016 00:52
To: Navdeep Uniyal
Cc: mini-ndn at lists.cs.ucla.edu
Subject: Re: [Mini-NDN] Interfaces went down in minindn

Okay, I looked at your experiment and modified my ndn-traffic-client/server accordingly. Initially all the Interests are answered correctly. Then you start to see some timeouts. Then all Interests time out. My total Interest loss was ~25%. I am looking into it further.

Ashlesh

From: Mini-NDN on behalf of Ashlesh Gawande (agawande)
Sent: Thursday, April 7, 2016 5:12 PM
To: Navdeep Uniyal
Cc: mini-ndn at lists.cs.ucla.edu
Subject: Re: [Mini-NDN] Interfaces went down in minindn

Is this ndn-traffic-client.conf the default one?

Ashlesh

From: Navdeep Uniyal
Sent: Thursday, April 7, 2016 2:58 AM
To: Ashlesh Gawande (agawande)
Cc: mini-ndn at lists.cs.ucla.edu; Lan Wang (lanwang)
Subject: RE: [Mini-NDN] Interfaces went down in minindn

Hi Ashlesh,

Please find attached the files I used for the experiments. "nectopo.conf" is the topology file; "nletest.py" is the experiment file.

sudo minindn --experiment=icnle nectopo.conf
On the Mininet CLI: ga ndn-traffic -c 1000 -i 10 /home/mininet/mini-ndn/ndn-traffic-generator/build/ndn-traffic-client.conf

Best Regards,
Navdeep Uniyal

From: Lan Wang (lanwang) [mailto:lanwang at memphis.edu]
Sent: Thursday, 7 April 2016 01:22
To: Navdeep Uniyal
Cc: Ashlesh Gawande (agawande); mini-ndn at lists.cs.ucla.edu
Subject: Re: [Mini-NDN] Interfaces went down in minindn

Ashlesh, maybe Navdeep can give you his setup files so you can repeat his experiment?

Lan

On Apr 6, 2016, at 10:32 AM, Navdeep Uniyal wrote:

Hi Ashlesh,

Thank you for the reply. The Interest sending rate is 100/sec, although I also checked with 10/sec; in both cases I found similar results. Interesting is the data generation and arrival timing: the data transfer behaves perfectly for the first few Interests, then it just stops, and the requested Data arrives just after the interfaces go down (mostly, I guess, when I am stopping the Mininet topology) and is marked as unsolicited. I could not determine whether this behavior is due to a link issue, an NFD issue, or an NDN traffic generator issue. I tried asking for a solution on the nfd-dev mailing list but could not get any response.

Best Regards,
Navdeep Uniyal
From: Ashlesh Gawande (agawande) [mailto:agawande at memphis.edu]
Sent: Wednesday, 6 April 2016 16:40
To: Navdeep Uniyal; mini-ndn at lists.cs.ucla.edu
Subject: Re: Interfaces went down in minindn

What is the rate of sending Interests? Can you try lowering it and see if the Interest drops decrease?

Ashlesh

From: Mini-NDN on behalf of Navdeep Uniyal
Sent: Wednesday, April 6, 2016 3:52 AM
To: mini-ndn at lists.cs.ucla.edu
Subject: [Mini-NDN] Interfaces went down in minindn

Hi everyone,

I have been running some experiments on Mini-NDN, using the NDN traffic generator for Interest and Data generation. I am facing a few issues, as I am unable to get Data packets in response after a certain period of time. In my configuration I set the CS size to zero so that all Interests can reach the producer. While investigating the Interest drops I found a few errors in the NFD logs:

1459932297.745268 ERROR: [EthernetTransport] [id=264,local=dev://sa-eth0,remote=ether://[01:00:5e:00:17:aa]] pcap_next_ex failed: The interface went down
1459932297.761311 ERROR: [EthernetTransport] [id=263,local=dev://sa-eth1,remote=ether://[01:00:5e:00:17:aa]] pcap_next_ex failed: The interface went down
1459932297.777269 ERROR: [EthernetTransport] [id=262,local=dev://sa-eth2,remote=ether://[01:00:5e:00:17:aa]] pcap_next_ex failed: The interface went down

Due to this, the Interests are not getting satisfied. Attached are the NFD logs of the producer (similar errors are observed on the other nodes as well). Please, can someone help me resolve the issue?

Best Regards,
Navdeep Uniyal
From nfd-call-notification at mail1.yoursunny.com Tue Apr 12 07:00:02 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Tue, 12 Apr 2016 07:00:02 -0700
Subject: [Nfd-dev] NFD call 20160412
Message-ID: <201604121400.u3CE02bi019044@lectura.cs.arizona.edu>

From lixia at CS.UCLA.EDU Tue Apr 12 08:06:24 2016
From: lixia at CS.UCLA.EDU (Lixia Zhang)
Date: Tue, 12 Apr 2016 08:06:24 -0700
Subject: [Nfd-dev] NFD call 20160412
In-Reply-To: <201604121400.u3CE02bi019044@lectura.cs.arizona.edu>
References: <201604121400.u3CE02bi019044@lectura.cs.arizona.edu>
Message-ID: <0A8E8930-DEA4-4645-BEBF-09E9F82579BA@cs.ucla.edu>

Unfortunately I will miss 3 NFD calls due to dept meetings/recruiting (4/12, 14, 19). Alex is interviewing today, so he may not make it either.

> On Apr 12, 2016, at 7:00 AM, NFD call notification wrote:
>
> Dear folks
> This is a reminder of the upcoming NFD call using Bluejeans https://bluejeans.com/760263096. The current call time is every Tuesday/Thursday 13:00-14:00 Pacific Time. The current agenda includes the following issues:
>
> http://redmine.named-data.net/issues/2513#note-71
> Boost.Log in NFD: target Boost>=1.54, and merge the code after dropping Ubuntu 12.04 support?
> need: Alex
>
> http://redmine.named-data.net/issues/3591
> NLSR and RIB Manager propagation advertisement protocol
>
> http://redmine.named-data.net/issues/3232
> faces/update command implementation, need assignment
>
> http://redmine.named-data.net/issues/3593
> NLSR: Handle old state in Sync digest tree

From Navdeep.Uniyal at neclab.eu Wed Apr 13 04:12:33 2016
From: Navdeep.Uniyal at neclab.eu (Navdeep Uniyal)
Date: Wed, 13 Apr 2016 11:12:33 +0000
Subject: [Nfd-dev] [Mini-NDN] Interfaces went down in minindn
In-Reply-To: <15421E67B274CD4AB5F6AEA46A684C3708AB30DB@PALLENE.office.hd>
Message-ID: <15421E67B274CD4AB5F6AEA46A684C3708AB31C2@PALLENE.office.hd>

Hi Junxiao,

Please, can you advise me on this? This issue has been blocking our testing for a long time now and I am not able to rectify it.

Best Regards,
Navdeep Uniyal

From: Mini-NDN [mailto:mini-ndn-bounces at lists.cs.ucla.edu] On Behalf Of Navdeep Uniyal
Sent: Tuesday, 12 April 2016 12:29
To: Ashlesh Gawande (agawande); mini-ndn at lists.cs.ucla.edu; nfd-dev at lists.cs.ucla.edu
Subject: Re: [Mini-NDN] Interfaces went down in minindn
[...]
From shijunxiao at email.arizona.edu Wed Apr 13 16:17:03 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 13 Apr 2016 16:17:03 -0700
Subject: [Nfd-dev] Making NFD transmit a NACK
Message-ID: <570ed36f.e2b9420a.ebff6.34a5@mx.google.com>

Hi Jeff

NFD can naturally reply with a Nack in two cases:

1. Sending an Interest with the same Nonce twice triggers a Nack with reason=Duplicate in response to the second transmission.
2. Sending an Interest with a name that does not match any route triggers a Nack with reason=NoRoute, if the active strategy is best-route.

Also, Teng developed a producer program that responds to every Interest with a Nack. This program is in the #3263 note-8 attachment and can be modified to test different cases.

A library implementation should be prepared to handle any Nack reason code (including undefined codes) and also a Nack without a reason.

Yours, Junxiao

From: Thompson, Jeff
Sent: Wednesday, April 13, 2016 16:02
To: Junxiao Shi
Subject: Making NFD transmit a NACK

Hi Junxiao,

I'm adding support to the Common Client Libraries for the NDNLPv2 network NACK. How can I test it? Can I make NFD send a NACK to the client?

Thanks,
- Jeff T
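The robustness rule in Junxiao's last paragraph can be made concrete with a tiny dispatch sketch in Python (the reason codes are the assigned values in the NDNLPv2 spec: Congestion=50, Duplicate=100, NoRoute=150; the handler and its callers are illustrative, not any real library API):

    KNOWN_REASONS = {50: "Congestion", 100: "Duplicate", 150: "NoRoute"}

    def on_nack(interest_name, reason_code):
        # reason_code may be None: a Nack is allowed to carry no reason at all.
        if reason_code is None:
            reason = "none"
        else:
            reason = KNOWN_REASONS.get(reason_code, "unrecognized(%d)" % reason_code)
        # Whatever the reason, treat the Interest as unsatisfied instead of
        # failing on an unknown code or retransmitting blindly.
        print("Nack for %s, reason %s" % (interest_name, reason))

    on_nack("/example/test", 150)   # NoRoute
    on_nack("/example/test", 999)   # an unrecognized code must still be handled
    on_nack("/example/test", None)  # Nack without a reason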
From zhuhongchen at BIT.edu.cn Wed Apr 13 18:19:50 2016
From: zhuhongchen at BIT.edu.cn (zhuhongchen)
Date: Thu, 14 Apr 2016 09:19:50 +0800
Subject: [Nfd-dev] How to add rules correctly to FIB
Message-ID: <160414091950234022007414@bit.edu.cn>

Dear all,

I know a way to add a route using the nfdc register command; now I am wondering whether that is also the correct way to add a million rules to the FIB. Besides, is 1 GB of memory enough for a FIB with a million entries, and does it make any difference whether these rules are headed toward the same host or not?

Many thanks,
Zhu

From nfd-call-notification at mail1.yoursunny.com Thu Apr 14 07:00:02 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Thu, 14 Apr 2016 07:00:02 -0700
Subject: [Nfd-dev] NFD call 20160414
Message-ID: <201604141400.u3EE02VU024881@lectura.cs.arizona.edu>

From shijunxiao at email.arizona.edu Thu Apr 14 13:24:59 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 14 Apr 2016 13:24:59 -0700
Subject: [Nfd-dev] Dropping of Ubuntu 12.04 support
In-Reply-To: 
References: 
Message-ID: 

Hi John

The 20160414 NFD call decided to drop Ubuntu 12.04 support upon the next release, as per the platform policy.

Yours, Junxiao

On Thu, Apr 7, 2016 at 8:51 AM, Junxiao Shi wrote:

> Hi John
>
> Per the platform policy, NDN projects are required to support the latest
> and previous Ubuntu LTS releases.
> As soon as Ubuntu 16.04 LTS is released, Ubuntu 12.04 will fall out of
> this range and become unsupported.
> Per the Ubuntu 16.04 release schedule, the exact date is Apr 21, 2016.
>
> Nodes not yet upgraded to 14.04 or later can continue to run 0.4.1. Wire
> formats are compatible.
>
> Yours, Junxiao
>
> On Thu, Apr 7, 2016 at 8:46 AM, Dehart, John wrote:
>
>> All:
>>
>> Is there a schedule for exactly when support for Ubuntu 12.04 will be
>> dropped?
>> The reason I ask is that most of the NDN Testbed still runs 12.04.
>> It will be a major effort to get all nodes upgraded and it will
>> undoubtedly cause a lot of disruption.
>>
>> John
>>
>> _______________________________________________
>> Nfd-dev mailing list
>> Nfd-dev at lists.cs.ucla.edu
>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

From jdd at wustl.edu Thu Apr 14 15:06:08 2016
From: jdd at wustl.edu (Dehart, John)
Date: Thu, 14 Apr 2016 22:06:08 +0000
Subject: [Nfd-dev] Dropping of Ubuntu 12.04 support
In-Reply-To: 
References: 
Message-ID: <5FEE950E-4FEC-4D6D-8A37-612C5E62F2A0@wustl.edu>

Junxiao,

Thanks for the update. Sorry I couldn't make the call today.

John

On Apr 14, 2016, at 3:24 PM, Junxiao Shi > wrote:

Hi John

The 20160414 NFD call decided to drop Ubuntu 12.04 support upon the next release, as per the platform policy.

Yours, Junxiao

On Thu, Apr 7, 2016 at 8:51 AM, Junxiao Shi > wrote:

Hi John

Per the platform policy, NDN projects are required to support the latest and previous Ubuntu LTS releases. As soon as Ubuntu 16.04 LTS is released, Ubuntu 12.04 will fall out of this range and become unsupported. Per the Ubuntu 16.04 release schedule, the exact date is Apr 21, 2016.

Nodes not yet upgraded to 14.04 or later can continue to run 0.4.1. Wire formats are compatible.

Yours, Junxiao

On Thu, Apr 7, 2016 at 8:46 AM, Dehart, John > wrote:

All:

Is there a schedule for exactly when support for Ubuntu 12.04 will be dropped? The reason I ask is that most of the NDN Testbed still runs 12.04. It will be a major effort to get all nodes upgraded and it will undoubtedly cause a lot of disruption.

John

_______________________________________________
Nfd-dev mailing list
Nfd-dev at lists.cs.ucla.edu
http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

From alexni1992 at gmail.com Mon Apr 18 01:33:03 2016
From: alexni1992 at gmail.com (Alexander Ni)
Date: Mon, 18 Apr 2016 17:33:03 +0900
Subject: [Nfd-dev] NFD: Error code:1, message:Timeout
Message-ID: 

Hi all,

Could somebody help me find the reason for the following error:

[root at Producer1 ndn]# nfd-status
1460967791.879474 INFO: [FaceTable] Removed face id=267 remote=fd://40 local=unix:///run/nfd.sock
Error code:1, message:Timeout
Error code:1, message:Timeout
Error code:1, message:Timeout
1460967803.531449 FATAL: [NFD] std::bad_alloc
1460967803.839138 FATAL: [NFD] std::bad_alloc
ERROR: error while receiving data from socket (Connection reset by peer)

As shown above, the error appears when I try to check the NFD status. I had previously installed a new OS version (Fedora 23) and the newest version of NFD (0.4.1), and from that moment I have not been able to check the NFD status without getting this error. I have another machine with the same OS and NFD versions, and I do not get this error there. The error also comes with the OS freezing badly, because RAM consumption rises to 100% while NFD is trying to produce the output for nfd-status.
Here is also the debug log, from while I tried to get nfd-status several times:

[root at Producer1 ndn]# nfd-start
1460967621.445331 INFO: [StrategyChoice] setDefaultStrategy /localhost/nfd/strategy/best-route/%FD%04
1460967621.445839 INFO: [InternalForwarderTransport] [id=0,local=internal://,remote=internal://] Creating transport
1460967621.445913 INFO: [FaceTable] Added face id=1 remote=internal:// local=internal://
1460967621.447226 WARNING: [CommandValidator] Wildcard identity is intended for demo purpose only and SHOULD NOT be used in production environment
1460967621.447244 INFO: [CommandValidator] Giving privilege "faces" to identity wildcard
1460967621.447254 INFO: [CommandValidator] Giving privilege "fib" to identity wildcard
1460967621.447266 INFO: [CommandValidator] Giving privilege "strategy-choice" to identity wildcard
1460967621.447603 INFO: [StrategyChoice] changeStrategy(/ndn/broadcast) from /localhost/nfd/strategy/best-route/%FD%04 to /localhost/nfd/strategy/multicast/%FD%01
1460967621.447797 INFO: [StrategyChoice] changeStrategy(/localhost) from /localhost/nfd/strategy/best-route/%FD%04 to /localhost/nfd/strategy/multicast/%FD%01
1460967621.448039 INFO: [StrategyChoice] changeStrategy(/localhost/nfd) from /localhost/nfd/strategy/multicast/%FD%01 to /localhost/nfd/strategy/best-route/%FD%04
1460967621.448273 INFO: [TablesConfigSection] Setting CS max packets to 65536
1460967621.451005 INFO: [MulticastUdpTransport] [id=0,local=udp4://192.168.1.1:56363,remote=udp4://224.0.23.170:56363] Creating transport
1460967621.451070 INFO: [FaceTable] Added face id=256 remote=udp4://224.0.23.170:56363 local=udp4://192.168.1.1:56363
1460967621.451219 INFO: [MulticastUdpTransport] [id=0,local=udp4://203.253.235.164:56363,remote=udp4://224.0.23.170:56363] Creating transport
1460967621.451274 INFO: [FaceTable] Added face id=257 remote=udp4://224.0.23.170:56363 local=udp4://203.253.235.164:56363
1460967621.451370 INFO: [EthernetTransport] [id=0,local=dev://enp4s0f1,remote=ether://[01:00:5e:00:17:aa]] Creating transport
1460967621.459711 INFO: [FaceTable] Added face id=258 remote=ether://[01:00:5e:00:17:aa] local=dev://enp4s0f1
1460967621.459768 INFO: [EthernetTransport] [id=0,local=dev://enp4s0f0,remote=ether://[01:00:5e:00:17:aa]] Creating transport
1460967621.468604 INFO: [FaceTable] Added face id=259 remote=ether://[01:00:5e:00:17:aa] local=dev://enp4s0f0
1460967621.468665 INFO: [EthernetTransport] [id=0,local=dev://eno4,remote=ether://[01:00:5e:00:17:aa]] Creating transport
1460967621.477496 INFO: [FaceTable] Added face id=260 remote=ether://[01:00:5e:00:17:aa] local=dev://eno4
1460967621.477564 INFO: [EthernetTransport] [id=0,local=dev://eno1,remote=ether://[01:00:5e:00:17:aa]] Creating transport
1460967621.486329 INFO: [FaceTable] Added face id=261 remote=ether://[01:00:5e:00:17:aa] local=dev://eno1
1460967621.486394 INFO: [EthernetTransport] [id=0,local=dev://eno2,remote=ether://[01:00:5e:00:17:aa]] Creating transport
1460967621.498327 INFO: [FaceTable] Added face id=262 remote=ether://[01:00:5e:00:17:aa] local=dev://eno2
1460967621.498389 INFO: [EthernetTransport] [id=0,local=dev://eno3,remote=ether://[01:00:5e:00:17:aa]] Creating transport
1460967621.532467 INFO: [FaceTable] Added face id=263 remote=ether://[01:00:5e:00:17:aa] local=dev://eno3
1460967621.532865 WARNING: [CommandValidator] Wildcard identity is intended for demo purpose only and SHOULD NOT be used in production environment
1460967621.532885 INFO: [CommandValidator] Giving privilege "faces" to identity wildcard
1460967621.533031 INFO: [CommandValidator] Giving privilege "fib" to identity wildcard
1460967621.533078 INFO: [CommandValidator] Giving privilege "strategy-choice" to identity wildcard
1460967621.533364 INFO: [InternalForwarderTransport] [id=0,local=null://,remote=null://] Creating transport
1460967621.533396 INFO: [FaceTable] Added face id=255 remote=null:// local=null://
1460967622.145354 INFO: [InternalForwarderTransport] [id=0,local=contentstore://,remote=contentstore://] Creating transport
1460967622.145392 INFO: [FaceTable] Added face id=254 remote=contentstore:// local=contentstore://
1460967622.716610 INFO: [PrivilegeHelper] dropped to effective uid=0 gid=0
1460967622.718703 INFO: [AutoPrefixPropagator] Load auto_prefix_propagate section in rib section
1460967622.718990 INFO: [AutoPrefixPropagator] Load auto_prefix_propagate section in rib section
1460967622.719025 INFO: [RibManager] Listening on: /localhost/nfd/rib
1460967622.726480 INFO: [RibManager] Start monitoring face create/destroy events
1460967622.731414 INFO: [UnixStreamTransport] [id=0,local=unix:///run/nfd.sock,remote=fd://38] Creating transport
1460967622.731446 INFO: [FaceTable] Added face id=264 remote=fd://38 local=unix:///run/nfd.sock
[root at Producer1 ndn]# 1460967623.935280 INFO: [AutoPrefixPropagator] local registration only for /localhost/nfd/rib
1460967646.743111 INFO: [UnixStreamTransport] [id=0,local=unix:///run/nfd.sock,remote=fd://39] Creating transport
1460967646.743156 INFO: [FaceTable] Added face id=265 remote=fd://39 local=unix:///run/nfd.sock
1460967651.527112 WARNING: [TcpChannel] [0.0.0.0:6363] Connect failed: No route to host
1460967652.165977 INFO: [Transport] [id=265,local=unix:///run/nfd.sock,remote=fd://39] setState UP -> FAILED
1460967652.166041 INFO: [Transport] [id=265,local=unix:///run/nfd.sock,remote=fd://39] setState FAILED -> CLOSED
1460967652.735913 INFO: [FaceTable] Removed face id=265 remote=fd://39 local=unix:///run/nfd.sock
1460967654.257621 INFO: [UnixStreamTransport] [id=0,local=unix:///run/nfd.sock,remote=fd://39] Creating transport
1460967654.257666 INFO: [FaceTable] Added face id=266 remote=fd://39 local=unix:///run/nfd.sock
1460967719.151450 INFO: [Transport] [id=266,local=unix:///run/nfd.sock,remote=fd://39] setState UP -> FAILED
1460967719.151601 INFO: [UnixStreamTransport] [id=0,local=unix:///run/nfd.sock,remote=fd://40] Creating transport
1460967719.151646 INFO: [FaceTable] Added face id=267 remote=fd://40 local=unix:///run/nfd.sock
1460967719.760576 INFO: [Transport] [id=266,local=unix:///run/nfd.sock,remote=fd://39] setState FAILED -> CLOSED
1460967720.339229 INFO: [FaceTable] Removed face id=266 remote=fd://39 local=unix:///run/nfd.sock
1460967720.339489 INFO: [UnixStreamTransport] [id=0,local=unix:///run/nfd.sock,remote=fd://39] Creating transport
1460967720.339515 INFO: [FaceTable] Added face id=268 remote=fd://39 local=unix:///run/nfd.sock
1460967772.911580 INFO: [Transport] [id=267,local=unix:///run/nfd.sock,remote=fd://40] setState UP -> FAILED
1460967773.155180 INFO: [Transport] [id=267,local=unix:///run/nfd.sock,remote=fd://40] setState FAILED -> CLOSED

Best Regards,
Alexander Ni
From nfd-call-notification at mail1.yoursunny.com Tue Apr 19 07:00:02 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Tue, 19 Apr 2016 07:00:02 -0700
Subject: [Nfd-dev] NFD call 20160419
Message-ID: <201604191400.u3JE02ZV031728@lectura.cs.arizona.edu>

From nfd-call-notification at mail1.yoursunny.com Thu Apr 21 07:00:02 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Thu, 21 Apr 2016 07:00:02 -0700
Subject: [Nfd-dev] NFD call 20160421
Message-ID: <201604211400.u3LE022p026514@lectura.cs.arizona.edu>

From yudiandreanp at live.com Mon Apr 25 03:13:16 2016
From: yudiandreanp at live.com (Yudi Andrean)
Date: Mon, 25 Apr 2016 10:13:16 +0000
Subject: [Nfd-dev] Installing NFD 0.4.1 strangely increases CPU workload
Message-ID: 

Hi all,

I tried uninstalling my old 0.4.0 on Ubuntu 14.04, then installing the NFD 0.4.1 PPA package, and my CPU workload increased. Here is what the 'top' command yields on my Ubuntu laptop:

[attached: screenshot of 'top' output]

Strangely, the commands init, python, and dbus-daemon ate CPU aggressively after the nfd daemon started. I tried nfd-stop'ing NFD, and the CPU workload went back to normal, before NFD started again automatically (because of upstart? cmiiw), eating the CPU. I uninstalled NFD and reinstalled it again, but it still used so much CPU. My NFD was not acting like this before. Any help on this suspected bug?

From shijunxiao at email.arizona.edu Mon Apr 25 06:43:41 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Mon, 25 Apr 2016 06:43:41 -0700
Subject: [Nfd-dev] Installing NFD 0.4.1 strangely increases CPU workload
In-Reply-To: 
References: 
Message-ID: <571e1f10.c661620a.88046.7363@mx.google.com>

Hi Yudi

I suspect some program is dynamically linked to an older ndn-cxx library that was removed during the upgrade. When such a program is set up as a system service, it will constantly crash and get restarted, causing the init process to take all the CPU. This problem has caused headaches on my servers with unattended upgrades.

My suggestion is: (1) upgrade all NDN software from the PPA at the same time (e.g. ndn-tools); don't mix versions. (2) Link your own programs statically to ndn-cxx, so an unattended upgrade won't accidentally break your program.

Yours, Junxiao

-----Original Message-----
From: "Yudi Andrean"
Sent: 4/25/2016 3:13
To: "nfd-dev at lists.cs.ucla.edu"
Subject: [Nfd-dev] Installing NFD 0.4.1 strangely increases CPU workload

Hi all,

I tried uninstalling my old 0.4.0 on Ubuntu 14.04, then installing the NFD 0.4.1 PPA package, and my CPU workload increased. Here is what the 'top' command yields on my Ubuntu laptop:

[attached: screenshot of 'top' output]

Strangely, the commands init, python, and dbus-daemon ate CPU aggressively after the nfd daemon started. I tried nfd-stop'ing NFD, and the CPU workload went back to normal, before NFD started again automatically (because of upstart? cmiiw), eating the CPU. I uninstalled NFD and reinstalled it again, but it still used so much CPU. My NFD was not acting like this before. Any help on this suspected bug?
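A quick way to check for the stale-linkage condition Junxiao describes is to inspect what each NDN binary is dynamically linked against; a library version removed by the upgrade shows up as "not found". A sketch — the binary paths here are examples, substitute your own programs:

    for bin in /usr/bin/nfd /usr/bin/ndnpeek; do
      echo "== $bin =="
      ldd "$bin" | grep -i ndn-cxx   # "not found" on this line means a stale link
    done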
From lixia at CS.UCLA.EDU Mon Apr 25 11:57:17 2016
From: lixia at CS.UCLA.EDU (Lixia Zhang)
Date: Mon, 25 Apr 2016 11:57:17 -0700
Subject: [Nfd-dev] Tech discussion with Van 11AM PDT Friday 4/29
Message-ID: 

Let me know who else I need to invite to the Google Hangout

From jefft0 at remap.UCLA.edu Mon Apr 25 12:13:15 2016
From: jefft0 at remap.UCLA.edu (Thompson, Jeff)
Date: Mon, 25 Apr 2016 19:13:15 +0000
Subject: [Nfd-dev] Interest equality for Nack
Message-ID: 

Hello all,

To support the network Nack in ndn-cxx, expressInterest takes a NackCallback and stores this along with the Interest in the library PIT. When ndn-cxx receives a Nack, it has to find the PIT entry with the "same" Interest as the Interest in the Nack packet. The test for "same" uses exact equality of the Interest wire encoding (including the nonce and all the selectors):
https://github.com/named-data/ndn-cxx/blob/4b4699897cf281c08b85343a1b0d02961eb727f0/src/interest.hpp#L429

Is it guaranteed that the Interest which goes through the whole network and is returned as a Nack will have the exact same encoding bytes as the Interest that was originally expressed? Is this stated somewhere? Could the encoding ever be different?

Thanks,
- Jeff T

From lixia at CS.UCLA.EDU Mon Apr 25 14:46:35 2016
From: lixia at CS.UCLA.EDU (Lixia Zhang)
Date: Mon, 25 Apr 2016 14:46:35 -0700
Subject: [Nfd-dev] Tech discussion with Van 11AM PDT Friday 4/29
In-Reply-To: 
References: 
Message-ID: 

The following message was sent in error. My apologies.

> On Apr 25, 2016, at 11:57 AM, Lixia Zhang wrote:
>
> Let me know who else I need to invite to the Google Hangout
> _______________________________________________
> Nfd-dev mailing list
> Nfd-dev at lists.cs.ucla.edu
> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

From shijunxiao at email.arizona.edu Mon Apr 25 16:40:01 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Mon, 25 Apr 2016 16:40:01 -0700
Subject: [Nfd-dev] Interest equality for Nack
In-Reply-To: 
References: 
Message-ID: 

Hi Jeff

Nack is hop-by-hop. A Nack packet cannot be "forwarded". When NFD decides to send a Nack to a downstream, the Nack being sent is constructed from the latest Interest received from that downstream; it is not a copy of a Nack received from elsewhere. Thus, it will have the same wire encoding.

From a protocol point of view, the Nack must be able to locate the correct PIT entry, which requires it to have the same Name, Selectors, and Link as the original Interest. The Nack must also carry the latest Nonce coming from the downstream. Otherwise, in case a Nack and a retransmitted Interest are in flight at the same time between an upstream and a downstream, if the downstream ignored the Nonce and accepted the Nack, it would incorrectly conclude that the upstream cannot answer its retransmitted Interest while the upstream is actively trying to find content for it.

After fixing Name, Selectors, Link, and Nonce, the only leftover field is InterestLifetime. Theoretically, the protocol could operate correctly even if the InterestLifetime in the Nack differed from the InterestLifetime in the Interest. In the NFD implementation, the InterestLifetime field never changes, so it is safe for ndn-cxx to compare the wire encoding of the entire Interest.
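(As a minimal sketch of that equality test — assuming the ndn-cxx 0.4.x API, where an lp::Nack carries the Interest that triggered it; the function name is illustrative, not the library's actual internals:)

    #include <ndn-cxx/interest.hpp>
    #include <ndn-cxx/lp/nack.hpp>

    // True when the Nack refers to the given pending Interest.
    // Comparing the full wire encoding covers Name, Selectors, Nonce,
    // and Link in a single check.
    bool
    matchesPendingInterest(const ndn::Interest& pending, const ndn::lp::Nack& nack)
    {
      return pending.wireEncode() == nack.getInterest().wireEncode();
    }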
Yours, Junxiao

On Mon, Apr 25, 2016 at 12:13 PM, Thompson, Jeff wrote:

> Hello all,
>
> To support the network Nack in ndn-cxx, expressInterest takes a
> NackCallback and stores this along with the Interest in the library PIT.
> When ndn-cxx receives a Nack, it has to find the PIT entry with the "same"
> Interest as the Interest in the Nack packet. The test for "same" uses exact
> equality of the Interest wire encoding (including the nonce and all the
> selectors).
> https://github.com/named-data/ndn-cxx/blob/4b4699897cf281c08b85343a1b0d02961eb727f0/src/interest.hpp#L429
>
> Is it guaranteed that the Interest which goes through the whole network
> and is returned as a Nack will have the exact same encoding bytes as the
> Interest that was originally expressed? Is this stated somewhere? Could the
> encoding ever be different?
>
> Thanks,
> - Jeff T

From shijunxiao at email.arizona.edu Mon Apr 25 16:46:19 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Mon, 25 Apr 2016 16:46:19 -0700
Subject: [Nfd-dev] Drop a Nack packet for Data?
Message-ID: 

Hi JeffT

ndn-cxx is designed for use with NFD. NFD never sends an LpPacket with a Nack field and a Data in the fragment, so it is unnecessary for ndn-cxx to consider this case. ndn-cxx will have undefined behavior if the *local* NFD is misbehaving.

NFD, instead, is designed to connect with any NDN forwarder. NFD correctly drops an LpPacket with a Nack field and a Data in the fragment:
https://github.com/named-data/NFD/blob/0de23a29c5c46d7134d03361244fb913159e750c/daemon/face/generic-link-service.cpp#L254-L258

Yours, Junxiao

On Mon, Apr 25, 2016 at 11:45 AM, Thompson, Jeff wrote:

> Hi Junxiao,
>
> I've been studying the NDNLPv2 wiki page, to implement Nack in the CCL
> libraries. It says "When Nack appears on an LpPacket carrying a network
> layer packet other than an Interest, the packet MUST be dropped."
>
> It looks like ndn-cxx processes a Data packet normally, even if it was
> inside a Nack packet. Should the client library drop such a packet?
>
> https://github.com/named-data/ndn-cxx/blob/bb64c17b389c482cb1bfec5bbc2ba13064498560/src/face.cpp#L515
>
> Thanks,
> - Jeff T
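(For illustration, a receive-path sketch of the NDNLPv2 rule JeffT quotes — this is not NFD's actual code; it assumes ndn-cxx's lp::Packet API and inspects the TLV type octet of the fragment, where type 5 denotes an Interest:)

    #include <ndn-cxx/lp/packet.hpp>
    #include <ndn-cxx/encoding/tlv.hpp>

    // Returns true when the LpPacket must be dropped: a Nack header is
    // only meaningful when the fragment carries an Interest.
    bool
    mustDropNack(const ndn::lp::Packet& lpPacket)
    {
      if (!lpPacket.has<ndn::lp::NackField>()) {
        return false; // no Nack header: nothing to enforce here
      }
      if (!lpPacket.has<ndn::lp::FragmentField>()) {
        return true; // a Nack header without a fragment cannot name an Interest
      }
      auto frag = lpPacket.get<ndn::lp::FragmentField>(); // pair of buffer iterators
      return frag.first == frag.second || *frag.first != ndn::tlv::Interest;
    }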
From yudiandreanp at live.com Mon Apr 25 21:25:54 2016
From: yudiandreanp at live.com (Yudi Andrean)
Date: Tue, 26 Apr 2016 04:25:54 +0000
Subject: [Nfd-dev] Installing NFD 0.4.1 strangely increases CPU workload
In-Reply-To: <571e1f10.c661620a.88046.7363@mx.google.com>
References: <571e1f10.c661620a.88046.7363@mx.google.com>
Message-ID: 

Junxiao,

The problem is resolved now. I found that I still had ndn-cxx and ndn-tools builds which I had built manually from sources and forgotten to update (I suspect they were conflicting with the ones I downloaded from the PPA along with NFD). I ./waf-uninstalled them and the machine's CPU went back to normal. Thanks!

---
Regards,
Yudi

________________________________
From: Junxiao Shi
Sent: Monday, April 25, 2016 8:43 PM
To: Yudi Andrean; nfd-dev at lists.cs.ucla.edu
Subject: RE: [Nfd-dev] Installing NFD 0.4.1 strangely increases CPU workload

Hi Yudi

I suspect some program is dynamically linked to an older ndn-cxx library that was removed during the upgrade. When such a program is set up as a system service, it will constantly crash and get restarted, causing the init process to take all the CPU. This problem has caused headaches on my servers with unattended upgrades.

My suggestion is: (1) upgrade all NDN software from the PPA at the same time (e.g. ndn-tools); don't mix versions. (2) Link your own programs statically to ndn-cxx, so an unattended upgrade won't accidentally break your program.

Yours, Junxiao

________________________________
From: Yudi Andrean
Sent: 4/25/2016 3:13
To: nfd-dev at lists.cs.ucla.edu
Subject: [Nfd-dev] Installing NFD 0.4.1 strangely increases CPU workload

Hi all,

I tried uninstalling my old 0.4.0 on Ubuntu 14.04, then installing the NFD 0.4.1 PPA package, and my CPU workload increased. Here is what the 'top' command yields on my Ubuntu laptop:

[attached: screenshot of 'top' output]

Strangely, the commands init, python, and dbus-daemon ate CPU aggressively after the nfd daemon started. I tried nfd-stop'ing NFD, and the CPU workload went back to normal, before NFD started again automatically (because of upstart? cmiiw), eating the CPU. I uninstalled NFD and reinstalled it again, but it still used so much CPU. My NFD was not acting like this before. Any help on this suspected bug?

From nfd-call-notification at mail1.yoursunny.com Tue Apr 26 07:00:03 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Tue, 26 Apr 2016 07:00:03 -0700
Subject: [Nfd-dev] NFD call 20160426
Message-ID: <201604261400.u3QE03Me026655@lectura.cs.arizona.edu>

From klaus at email.arizona.edu Tue Apr 26 21:44:30 2016
From: klaus at email.arizona.edu (Klaus Schneider)
Date: Tue, 26 Apr 2016 21:44:30 -0700
Subject: [Nfd-dev] Interest equality for Nack
In-Reply-To: 
References: 
Message-ID: <572043AE.1060201@email.arizona.edu>

Hi Junxiao,

>> The Nack must also carry the latest Nonce coming from the downstream.
>> Otherwise, in case a Nack and a retransmitted Interest are in flight at
>> the same time between an upstream and a downstream, if the downstream
>> ignored the Nonce and accepted the Nack, it would incorrectly conclude
>> that the upstream cannot answer its retransmitted Interest while the
>> upstream is actively trying to find content for it.

- Is there any specification of this design decision?

- Is the "retransmitted Interest" created by the link layer (NDNLP) or by the NDN network layer?

I guess that the downstream router retransmits a packet because it thinks that the upstream has lost the packet. Shouldn't this retransmission timer be much longer than it usually takes for the upstream to send a NACK back? Thus, the retransmitted Interest and the NACK are unlikely to be in flight at the same time. You send an Interest to the upstream and either get a NACK back quickly, or not at all.

Moreover, whatever the reason for the original NACK was (let's say "no path"), there is a good chance that the condition will still apply to the retransmitted Interest, so the downstream would make no mistake in accepting the NACK for the old packet. (If necessary, I'll continue the discussion on the other two types of NACKs.)

I'm not saying that "putting the latest nonce from the downstream Interest into NACK packets" is a bad choice. It might just be unnecessary for most cases.

Best regards,
Klaus

On 04/25/2016 04:40 PM, Junxiao Shi wrote:
> Hi Jeff
>
> Nack is hop-by-hop. A Nack packet cannot be "forwarded".
> When NFD decides to send a Nack to a downstream, the Nack being sent is
> constructed from the latest Interest received from that downstream; it is
> not a copy of a Nack received from elsewhere. Thus, it will have the
> same wire encoding.
>
> From a protocol point of view, the Nack must be able to locate the
> correct PIT entry, which requires it to have the same Name, Selectors,
> and Link as the original Interest.
> The Nack must also carry the latest Nonce coming from the downstream.
> Otherwise, in case a Nack and a retransmitted Interest are in flight at
> the same time between an upstream and a downstream, if the downstream
> ignored the Nonce and accepted the Nack, it would incorrectly conclude
> that the upstream cannot answer its retransmitted Interest while the
> upstream is actively trying to find content for it.
>
> After fixing Name, Selectors, Link, and Nonce, the only leftover field
> is InterestLifetime. Theoretically, the protocol could operate correctly
> even if the InterestLifetime in the Nack differed from the InterestLifetime
> in the Interest. In the NFD implementation, the InterestLifetime field
> never changes, so it is safe for ndn-cxx to compare the wire encoding of
> the entire Interest.
>
> Yours, Junxiao
>
> On Mon, Apr 25, 2016 at 12:13 PM, Thompson, Jeff wrote:
>
>     Hello all,
>
>     To support the network Nack in ndn-cxx, expressInterest takes a
>     NackCallback and stores this along with the Interest in the library
>     PIT. When ndn-cxx receives a Nack, it has to find the PIT entry with
>     the "same" Interest as the Interest in the Nack packet. The test for
>     "same" uses exact equality of the Interest wire encoding (including
>     the nonce and all the selectors).
>     https://github.com/named-data/ndn-cxx/blob/4b4699897cf281c08b85343a1b0d02961eb727f0/src/interest.hpp#L429
>
>     Is it guaranteed that the Interest which goes through the whole
>     network and is returned as a Nack will have the exact same encoding
>     bytes as the Interest that was originally expressed? Is this stated
>     somewhere? Could the encoding ever be different?
>
>     Thanks,
>     - Jeff T
>
> _______________________________________________
> Nfd-dev mailing list
> Nfd-dev at lists.cs.ucla.edu
> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

From jefft0 at remap.ucla.edu Wed Apr 27 13:31:01 2016
From: jefft0 at remap.ucla.edu (Thompson, Jeff)
Date: Wed, 27 Apr 2016 20:31:01 +0000
Subject: [Nfd-dev] expressInterest when Nack received without a NackCallback
Message-ID: 

Hello.

A question came up while studying network Nack support in ndn-cxx. Suppose someone has an application which calls expressInterest with only the OnData and OnTimeout callbacks. Normally, if the library doesn't call OnData, it calls OnTimeout after the Interest lifetime.

However, if the library receives a network Nack, it calls OnTimeout immediately, regardless of the Interest lifetime. Could this cause confusion? I ask because it is not deterministic whether the network will reply with a Nack or not. If the application is not set up to handle Nacks, would it be better to just let the Interest time out, even if the library receives a Nack?

Thanks for any feedback,
- Jeff T
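(For context, the overload at the center of this question looks like the following in a minimal consumer — a sketch under the ndn-cxx 0.4.x API, with an arbitrary name; the Nack callback fires as soon as the Nack arrives, well before the Interest lifetime would expire:)

    #include <ndn-cxx/face.hpp>
    #include <iostream>

    int main()
    {
      ndn::Face face;
      face.expressInterest(ndn::Interest(ndn::Name("/example/app/1")),
        [] (const ndn::Interest&, const ndn::Data& data) {
          std::cout << "Data: " << data.getName() << std::endl;
        },
        [] (const ndn::Interest&, const ndn::lp::Nack& nack) {
          // called immediately upon a network Nack, regardless of InterestLifetime
          std::cout << "Nack: " << nack.getReason() << std::endl;
        },
        [] (const ndn::Interest& interest) {
          std::cout << "Timeout: " << interest.getName() << std::endl;
        });
      face.processEvents();
      return 0;
    }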
From aa at CS.UCLA.EDU Wed Apr 27 13:49:57 2016
From: aa at CS.UCLA.EDU (Alex Afanasyev)
Date: Wed, 27 Apr 2016 13:49:57 -0700
Subject: [Nfd-dev] expressInterest when Nack received without a NackCallback
In-Reply-To: 
References: 
Message-ID: <3CEDBA14-62CC-4DD8-B70F-4D6565E39D03@cs.ucla.edu>

> On Apr 27, 2016, at 1:31 PM, Thompson, Jeff wrote:
>
> Hello.
>
> A question came up while studying network Nack support in ndn-cxx. Suppose someone has an application which calls expressInterest with only the OnData and OnTimeout callbacks. Normally, if the library doesn't call OnData, it calls OnTimeout after the Interest lifetime.
>
> However, if the library receives a network Nack, it calls OnTimeout immediately, regardless of the Interest lifetime. Could this cause confusion? I ask because it is not deterministic whether the network will reply with a Nack or not. If the application is not set up to handle Nacks, would it be better to just let the Interest time out, even if the library receives a Nack?
>
> Thanks for any feedback,
> - Jeff T

Of course it can cause confusion. Depending on what logic is attached to the onTimeout action, it may result in unnecessary retransmissions. It is not really better to "time out" when a NACK is received, as the library already has knowledge that the Interest will never be satisfied.

---
Alex

From nfd-call-notification at mail1.yoursunny.com Thu Apr 28 07:00:03 2016
From: nfd-call-notification at mail1.yoursunny.com (NFD call notification)
Date: Thu, 28 Apr 2016 07:00:03 -0700
Subject: [Nfd-dev] NFD call 20160428
Message-ID: <201604281400.u3SE03pZ028728@lectura.cs.arizona.edu>

From jefft0 at remap.UCLA.edu Thu Apr 28 08:13:10 2016
From: jefft0 at remap.UCLA.edu (Thompson, Jeff)
Date: Thu, 28 Apr 2016 15:13:10 +0000
Subject: [Nfd-dev] expressInterest when Nack received without a NackCallback
In-Reply-To: <3CEDBA14-62CC-4DD8-B70F-4D6565E39D03@cs.ucla.edu>
References: <3CEDBA14-62CC-4DD8-B70F-4D6565E39D03@cs.ucla.edu>
Message-ID: 

> the library already has knowledge that the Interest will never be
> satisfied.

Maybe the Interest with the same nonce will never be satisfied. But if the network Nack is due to temporary congestion, then the same Interest will work on the next retransmission (which the application should do after a delay).

On 2016/4/27, 13:49:57, "Alex Afanasyev" wrote:
>
> Of course it can cause confusion. Depending on what logic is attached to the
> onTimeout action, it may result in unnecessary retransmissions.
> It is not really better to "time out" when a NACK is received, as the
> library already has knowledge that the Interest will never be satisfied.
>
> ---
> Alex

From shijunxiao at email.arizona.edu Thu Apr 28 12:17:22 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 28 Apr 2016 12:17:22 -0700
Subject: [Nfd-dev] Interest equality for Nack
In-Reply-To: <572043AE.1060201@email.arizona.edu>
References: <572043AE.1060201@email.arizona.edu>
Message-ID: <2773584E-5581-401A-8D22-7D0075D04A92@email.arizona.edu>

Hi Klaus

How an incoming Nack matches a PIT entry is determined by NFD's incoming Nack pipeline. Its specification is in the NFD developer guide.

A "retransmitted Interest" can only be created by the consumer application. Cheng Yi's thesis clarified the difference between "retry" and "retransmit": the forwarding strategy on a forwarder can retry an Interest (without changing the Nonce); only the consumer application can retransmit an Interest (with a new Nonce). NDNLPv2 retransmission is invisible to the forwarding pipelines and strategy, so it is irrelevant to Nack.

When the original Nack has reason "duplicate", in most cases this condition will not apply to a retransmitted Interest. "No route" and "congestion" may still apply to a retransmitted Interest, but it is better to adopt the same rule regardless of Nack type, to simplify the protocol.

Yours, Junxiao

> On Apr 26, 2016, at 9:44 PM, Klaus Schneider wrote:
>
> Hi Junxiao,
>
>>> The Nack must also carry the latest Nonce coming from the downstream.
>>> Otherwise, in case a Nack and a retransmitted Interest are in flight at
>>> the same time between an upstream and a downstream, if the downstream
>>> ignored the Nonce and accepted the Nack, it would incorrectly conclude
>>> that the upstream cannot answer its retransmitted Interest while the
>>> upstream is actively trying to find content for it.
>
> - Is there any specification of this design decision?
>
> - Is the "retransmitted Interest" created by the link layer (NDNLP) or by the NDN network layer?
>
> I guess that the downstream router retransmits a packet because it thinks that the upstream has lost the packet. Shouldn't this retransmission timer be much longer than it usually takes for the upstream to send a NACK back? Thus, the retransmitted Interest and the NACK are unlikely to be in flight at the same time.
>
> You send an Interest to the upstream and either get a NACK back quickly, or not at all.
>
> Moreover, whatever the reason for the original NACK was (let's say "no path"), there is a good chance that the condition will still apply to the retransmitted Interest, so the downstream would make no mistake in accepting the NACK for the old packet. (If necessary, I'll continue the discussion on the other two types of NACKs.)
>
> I'm not saying that "putting the latest nonce from the downstream Interest into NACK packets" is a bad choice. It might just be unnecessary for most cases.
>
> Best regards,
> Klaus
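(To make the retry/retransmit distinction above concrete: a consumer-side retransmission sketch, assuming the ndn-cxx 0.4.x API and its util/random helper. The essential point is the freshly generated Nonce, so forwarders do not mistake the retransmission for a looping duplicate:)

    #include <ndn-cxx/face.hpp>
    #include <ndn-cxx/util/random.hpp>
    #include <iostream>

    // Retransmit in the consumer role: same Name, new Nonce.
    void
    retransmit(ndn::Face& face, const ndn::Interest& old)
    {
      ndn::Interest retx(old.getName());
      retx.setNonce(ndn::random::generateWord32()); // fresh Nonce marks this as a retransmission
      face.expressInterest(retx,
        [] (const ndn::Interest&, const ndn::Data&) { std::cout << "satisfied" << std::endl; },
        [] (const ndn::Interest&, const ndn::lp::Nack&) { std::cout << "nacked again" << std::endl; },
        [] (const ndn::Interest&) { std::cout << "timeout" << std::endl; });
    }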
From peter at remap.ucla.edu Fri Apr 29 15:33:01 2016
From: peter at remap.ucla.edu (Gusev, Peter)
Date: Fri, 29 Apr 2016 22:33:01 +0000
Subject: [Nfd-dev] NDN Congestion Control Design
In-Reply-To: <56E0CF8E.3050405@cs.arizona.edu>
References: <569AF5C5.70202@cs.arizona.edu> <56B15D25.6060002@cs.arizona.edu> <56CD0B74.3060702@cs.arizona.edu> <0C1BEF3F-545F-4B42-B6AA-482C4E1A169B@remap.ucla.edu> <91B3528D-4A54-4D45-BF44-FE0B087805CA@cs.ucla.edu> <9CF6A0DE-B856-467E-ADAB-A4D5337FE09C@cs.arizona.edu> <56D527D6.5020102@cs.arizona.edu> <56D5F7EF.8050406@cs.arizona.edu> <56D619DD.9070103@cs.arizona.edu> <56D61F9A.5040500@cs.arizona.edu> <5203D1AF-98B3-4C47-BFFE-E9C22B29FE34@remap.ucla.edu> <56D6499C.5020200@cs.arizona.edu> <020480B79F5DB249B89F4856EB5270EA99A5D955@EM1A.ad.ucla.edu> <56D76EC4.2030802@cs.arizona.edu> <1EDAD926-C8E4-4068-8C80-60D24E519104@remap.ucla.edu> <56D76FFE.9020106@cs.arizona.edu> <56DB41F8.2060608@cs.arizona.edu> <56E0CF8E.3050405@cs.arizona.edu>
Message-ID: <0116ED64-8FFF-485A-B332-9A8CD58217D2@remap.ucla.edu>

Hi Klaus & all,

I'd like to check in with you regarding your plans for the congestion control implementation and whether you need any support from the NDN-RTC/ndncon dev team. From the last e-mails I gathered that a few things need to be fixed in NDN-RTC:

> 1. Fix the RTT averaging. Instead of calculating the average over the whole run, use either an exponential moving average or a simple moving average (e.g. over the last 10 seconds).
> 2. Manipulate lambda_max (it looks like it is too low in some cases) or avoid it completely and set lambda_d directly based on an AIMD mechanism driven by packet losses. Also take the buffer size into account. (This requires some more design decisions.)
> 3. Remove the retransmission suppression of the Access Strategy or set it to a value that is low enough for NDN-RTC to work correctly.
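(A sketch of the smoothing suggested in item 1 above: an exponentially weighted RTT average in the spirit of TCP's SRTT. The 0.125 gain is the classic RFC 6298 value, used here purely as an illustration — NDN-RTC would pick its own constant:)

    // Exponential moving average of RTT samples: recent samples dominate,
    // unlike a whole-run mean that never forgets startup conditions.
    class RttAverage
    {
    public:
      void
      addMeasurement(double rttMs)
      {
        if (!m_hasSample) {
          m_srtt = rttMs; // seed with the first sample
          m_hasSample = true;
        }
        else {
          m_srtt = (1.0 - ALPHA) * m_srtt + ALPHA * rttMs;
        }
      }

      double getSrtt() const { return m_srtt; }

    private:
      static constexpr double ALPHA = 0.125; // smoothing gain (RFC 6298)
      double m_srtt = 0.0;
      bool m_hasSample = false;
    };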
Once these are fixed, will an Ubuntu image with a headless NDN-RTC app be useful for you to proceed with your implementation/testing?

We plan on having large-scale tests using NDN-RTC around mid-June. Apart from testing existing functionality at scale, it would be useful to also test any congestion control schemes (app- or network-level) at that time, in preparation for the large-scale demo later this fall.

Please let me know what you think.

Thanks,

--
Peter Gusev

peter at remap.ucla.edu
+1 213 5872748
peetonn_ (skype)

Software Engineer/Programmer Analyst @ REMAP UCLA
Video streaming/ICN networks/Creative Development

On Mar 9, 2016, at 5:36 PM, Klaus Schneider > wrote:

Hi,

here is a major change to the design from the previous document. The problem is the following: in the document I described using the "time that the sojourn time remained above the target value" (called "queuing time" below) as the measure of the queue size. Currently, I only use a binary notion of congestion (marked/not marked). I tried to use the quantitative congestion notion ("queuing time") to influence either the reaction at the consumer or the adjustment of the forwarding ratio, but could never achieve a notable performance benefit.

I just figured out the problem. Look at the figures "Congestion Window" and "Congestion Marks" on slide 5 and "producer1, 0" on the last slide of the PDF. The "queuing time" is actually negatively related to the mismatch between the congestion window and the current queue size at the router:

- the queuing time is highest around the time the congestion window has reached the optimal size
- it is highest right before the queuing delay at the router falls below the target value (5 ms).

A better metric for the amount of congestion would be "the minimum queuing delay over the last interval (100 ms)". However, I think there is a good reason why CoDel doesn't use it: it is much more complex in terms of CPU usage. Basically, it would require calculating a minimum over all packets in the last 100 ms for *every incoming packet*. The current design only needs one comparison (check whether the queuing delay exceeds the target) instead of this minimum calculation.

Does anyone know how to do this efficiently? If there isn't an efficient algorithm for that, I think we should stick to a stream of binary congestion marks. These also have the benefit of lower design complexity and traffic overhead.
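(One well-known answer to the efficiency question above is the monotonic-deque, or "ascending minima", technique: it maintains a sliding-window minimum in amortized O(1) per sample, because each sample is pushed and popped at most once. A sketch, with times and delays as plain doubles for brevity:)

    #include <deque>

    // Sliding-window minimum of queuing delays over the last `window` seconds.
    class WindowedMinDelay
    {
    public:
      explicit
      WindowedMinDelay(double window)
        : m_window(window)
      {
      }

      void
      addSample(double now, double delay)
      {
        // evict samples that fell out of the time window
        while (!m_samples.empty() && m_samples.front().time <= now - m_window)
          m_samples.pop_front();
        // evict samples that can never be the minimum again
        while (!m_samples.empty() && m_samples.back().delay >= delay)
          m_samples.pop_back();
        m_samples.push_back({now, delay});
      }

      double
      getMin() const // precondition: at least one sample in the window
      {
        return m_samples.front().delay;
      }

    private:
      struct Sample { double time; double delay; };
      std::deque<Sample> m_samples;
      double m_window;
    };

Whether the constant factors of this approach beat CoDel's single comparison in a forwarder's fast path is a separate question, as the message above notes.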
Best regards,
Klaus

P.S. I would appreciate it if you wouldn't post any of the discussion or design documents to Redmine quite yet. I'll post them there myself (once they are ready) in a format which is more appropriate for a broader audience, that is, together with some explanations and measurement results.

On 03/05/2016 01:30 PM, Klaus Schneider wrote:

Hi,

sorry for the delay. The short-term design should be like the one I sent around, but without the forwarding strategy part. Plus we need a different reaction at the consumer. I'll have the design ready at the retreat.

We can also look into some other aspects of NDN-RTC that impact performance and try evaluating them in Mini-NDN. @Peter: do you have any suggestions for the hackathon project?

Regarding the original topic: does anyone have feedback on my design document?

Best regards,
Klaus

From shijunxiao at email.arizona.edu Fri Apr 29 16:39:10 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Fri, 29 Apr 2016 16:39:10 -0700
Subject: [Nfd-dev] Receiving interests over WebSocket
Message-ID: 

Hi JeffT

NFD can forward an Interest toward the micro forwarder via WebSocket, if there is a FIB nexthop record pointing to the WebSocket face, and the forwarding strategy decides to use this nexthop. There is no technical barrier in this regard. An NDN-JS browser application is already able to act as a producer, and the micro forwarder is no different from a browser application from NFD's point of view.

In order to get a FIB nexthop record pointing to the WebSocket face, the micro forwarder should send a prefix registration command to NFD under the /localhop/nfd prefix, similar to NFD-RIB's automatic prefix propagation. As indicated in #3568, the testbed certificate itself, rather than a sub-certificate, must be used to sign this prefix registration command, because the router NFD cannot fetch sub-certificates. Thus, the user has to manually request a testbed certificate from ndncert, export it into a format compatible with NDN-JS (step 1-2, step 3, step 4-5), and import it into the micro forwarder.

Yours, Junxiao

On Fri, Apr 29, 2016 at 4:25 PM, Thompson, Jeff wrote:

> Hi Junxiao,
>
> As we discussed at the Hackathon, the NDN Micro Forwarder in the browser
> needs to make a WebSocket connection to a remote NFD and needs to receive
> Interests over the same connection.
>
> How would you like to proceed? Is there a Redmine issue for this? (I
> couldn't find it.) Should we discuss it on an NFD call?
>
> Thanks,
> - Jeff T

From shijunxiao at email.arizona.edu Sat Apr 30 15:40:25 2016
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sat, 30 Apr 2016 15:40:25 -0700
Subject: [Nfd-dev] expressInterest when Nack received without a NackCallback
In-Reply-To: 
References: 
Message-ID: 

Hi JeffT

In ndn-cxx, all Face::expressInterest overloads that do not have a NackCallback parameter are marked as @deprecated. Since the current logic seems to work with existing applications, I think it is acceptable. I have reported bugs against the applications that use the deprecated overloads; in other words, every application is required to handle Nacks.

The deprecated overloads will be marked as DEPRECATED in #3334 once the bugs in the existing applications are resolved. Afterwards, any new attempt to use the deprecated overloads will cause a compiler warning or error.

Yours, Junxiao

On Wed, Apr 27, 2016 at 1:31 PM, Thompson, Jeff wrote:

> Hello.
>
> A question came up while studying network Nack support in ndn-cxx. Suppose
> someone has an application which calls expressInterest with only the OnData
> and OnTimeout callbacks. Normally, if the library doesn't call OnData, it
> calls OnTimeout after the Interest lifetime.
>
> However, if the library receives a network Nack, it calls OnTimeout
> immediately, regardless of the Interest lifetime. Could this cause
> confusion? I ask because it is not deterministic whether the network will
> reply with a Nack or not. If the application is not set up to handle Nacks,
> would it be better to just let the Interest time out, even if the library
> receives a Nack?
>
> Thanks for any feedback,
> - Jeff T