From jefft0 at remap.ucla.edu Tue Aug 5 02:51:07 2014 From: jefft0 at remap.ucla.edu (Thompson, Jeff) Date: Tue, 5 Aug 2014 09:51:07 +0000 Subject: [Nfd-dev] How to treat ".." in an NDN URI? Message-ID: The TLV specification for the NDN URI scheme says "To unambiguously represent name components that would collide with the use of . and .. for relative URIs, any component that consists solely of one or more periods is encoded using three additional periods.". http://named-data.net/doc/ndn-tlv/name.html#ndn-uri-scheme If an NDN URI uses the "relative" value of "..", how should the URI be decoded? Specifically, should it be treated as "up one level" like in a Unix path? For example, should the URI "/a/b/../c" be decoded as the name "/a/c"? (This question comes from an issue on the ndn-js Redmine: http://redmine.named-data.net/issues/1818). Thanks, - Jeff T -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefft0 at remap.ucla.edu Tue Aug 5 06:22:00 2014 From: jefft0 at remap.ucla.edu (Thompson, Jeff) Date: Tue, 5 Aug 2014 13:22:00 +0000 Subject: [Nfd-dev] How to treat ".." in an NDN URI? In-Reply-To: References: Message-ID: Right now, both ndn-cxx and ndn-cpp treat ".." as an illegal encoding for a component and drop it. So, "/a/b/../c" simply becomes "/a/b/c". But ndnx (ndnd-tlv) treats ".." as "up one level" and it becomes "/a/c". The question is whether the TLV specification should spell out the correct behavior. - Jeff T From: , Jeff Thompson > Date: Tuesday, August 5, 2014 2:51 AM To: nfd-dev > Subject: How to treat ".." in an NDN URI? The TLV specification for the NDN URI scheme says "To unambiguously represent name components that would collide with the use of . and .. for relative URIs, any component that consists solely of one or more periods is encoded using three additional periods.". http://named-data.net/doc/ndn-tlv/name.html#ndn-uri-scheme If an NDN URI uses the "relative" value of "..", how should the URI be decoded? Specifically, should it be treated as "up one level" like in a Unix path? For example, should the URI "/a/b/../c" be decoded as the name "/a/c"? (This question comes from an issue on the ndn-js Redmine: http://redmine.named-data.net/issues/1818). Thanks, - Jeff T -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.UCLA.EDU Tue Aug 5 08:46:24 2014 From: jburke at remap.UCLA.EDU (Burke, Jeff) Date: Tue, 5 Aug 2014 15:46:24 +0000 Subject: [Nfd-dev] NFD benchmarking results? Message-ID: Hi, We are trying to track down what is causing packet loss / delay when using ndnrtc over NFD. We will prepare something to replicate the results later this week after the Cisco visit. In the meantime, are there any benchmarks for NFD (on the testbed or not) that would give us some sense of packet processing times and expected throughput on various platforms in comparison to ndnx? I seem to recall that there was some internal testing of this a while ago. If not, would it be possible for the NFD team to perform some basic benchmarks and comparisons? This would help us troubleshoot this problem. Thanks, Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.ARIZONA.EDU Tue Aug 5 09:20:27 2014 From: shijunxiao at email.ARIZONA.EDU (Junxiao Shi) Date: Tue, 5 Aug 2014 09:20:27 -0700 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: References: Message-ID: Hi Jeff You may use ndn-traffic-generator to run benchmarks as you need.
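As a minimal sketch of what such a benchmark can look like (the prefix and batch size are illustrative, and the two-callback expressInterest overload matches the 2014-era ndn-cxx API, so adjust for your version):

#include <ndn-cxx/face.hpp>
#include <chrono>
#include <iostream>

// Hypothetical micro-benchmark: flood N unique Interests at the local
// forwarder and report how many were satisfied, and at what rate.
int main()
{
  ndn::Face face;
  const int N = 1000;            // assumed batch size
  int nData = 0, nTimeout = 0;

  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < N; ++i) {
    ndn::Interest interest(ndn::Name("/example/bench").appendNumber(i));
    face.expressInterest(interest,
      [&] (const ndn::Interest&, const ndn::Data&) { ++nData; },  // satisfied
      [&] (const ndn::Interest&) { ++nTimeout; });                // timed out
  }
  face.processEvents();          // block until every Interest is resolved
  std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;

  std::cout << nData << " Data, " << nTimeout << " timeouts in " << dt.count()
            << " s (" << N / dt.count() << " Interests/sec)" << std::endl;
  return 0;
}

A raw flood like this only measures forwarder limits, not application behavior.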
Please be sure to define a traffic pattern that reflects the reality of the application you are trying to model. Contact John DeHart if you want generic benchmark results collected on ONL testbed. Yours, Junxiao On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff wrote: > > Hi, > > We are trying to track down what is causing packet loss / delay when > using ndnrtc over NFD. We will prepare something to replicate the results > later this week after the Cisco visit. In the meantime, are there any > benchmarks for NFD (on the testbed or not) that would give us some sense of > packet processing times and expected throughput on various platforms in > comparison to ndnx? I seem to recall that there was some internal testing > of this awhile ago. If not, would it be possible for the NFD team to > perform some basic benchmarks and comparisons? This would help us > troubleshoot this problem. > > Thanks, > Jeff > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.UCLA.EDU Tue Aug 5 09:22:47 2014 From: jburke at remap.UCLA.EDU (Burke, Jeff) Date: Tue, 5 Aug 2014 16:22:47 +0000 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: Message-ID: Hi Junxiao, Thanks. I was aware of this tool but asking if any benchmarks had already been generated as part of the development and testing process? Jeff From: Junxiao Shi > Date: Tue, 5 Aug 2014 09:20:27 -0700 To: Jeff Burke > Cc: "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NFD benchmarking results? Hi Jeff You may use ndn-traffic-generator to run benchmarks as you need. Please be sure to define a traffic pattern that reflects the reality of the application you are trying to model. Contact John DeHart if you want generic benchmark results collected on ONL testbed. Yours, Junxiao On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff > wrote: Hi, We are trying to track down what is causing packet loss / delay when using ndnrtc over NFD. We will prepare something to replicate the results later this week after the Cisco visit. In the meantime, are there any benchmarks for NFD (on the testbed or not) that would give us some sense of packet processing times and expected throughput on various platforms in comparison to ndnx? I seem to recall that there was some internal testing of this awhile ago. If not, would it be possible for the NFD team to perform some basic benchmarks and comparisons? This would help us troubleshoot this problem. Thanks, Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From dibenede at cs.colostate.edu Tue Aug 5 09:25:41 2014 From: dibenede at cs.colostate.edu (Steve DiBenedetto) Date: Tue, 5 Aug 2014 10:25:41 -0600 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: References: Message-ID: Chengyu did some performance profiling a bit ago: http://redmine.named-data.net/issues/1621. There's also a step-by-step guide for how to replicate the profiling attached to the issue. Hope that helps, Steve On Aug 5, 2014, at 10:22 AM, Burke, Jeff wrote: > Hi Junxiao, > > Thanks. I was aware of this tool but asking if any benchmarks had already been generated as part of the development and testing process? > > Jeff > > From: Junxiao Shi > Date: Tue, 5 Aug 2014 09:20:27 -0700 > To: Jeff Burke > Cc: "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NFD benchmarking results? > >> Hi Jeff >> >> You may use ndn-traffic-generator to run benchmarks as you need. 
>> Please be sure to define a traffic pattern that reflects the reality of the application you are trying to model. >> >> Contact John DeHart if you want generic benchmark results collected on ONL testbed. >> >> Yours, Junxiao >> >> >> On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff wrote: >>> >>> Hi, >>> >>> We are trying to track down what is causing packet loss / delay when using ndnrtc over NFD. We will prepare something to replicate the results later this week after the Cisco visit. In the meantime, are there any benchmarks for NFD (on the testbed or not) that would give us some sense of packet processing times and expected throughput on various platforms in comparison to ndnx? I seem to recall that there was some internal testing of this awhile ago. If not, would it be possible for the NFD team to perform some basic benchmarks and comparisons? This would help us troubleshoot this problem. >>> >>> Thanks, >>> Jeff >>> > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Tue Aug 5 09:27:23 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Tue, 5 Aug 2014 09:27:23 -0700 Subject: [Nfd-dev] How to treat ".." in an NDN URI? In-Reply-To: References: Message-ID: Hi JeffT TLV spec cites RFC3986 for URI syntax. The processing of ".." doesn't need to be mentioned in TLV spec because it's inherited from RFC3986. RFC3986 says: The path segments "." and "..", also known as dot-segments, are defined for relative reference within the path name hierarchy. They are intended for use at the beginning of a relative-path reference to indicate relative position within the hierarchical tree of names. Therefore, if ".." appears within an absolute ndn URI, the entire URI is invalid and should raise an error. Yours, Junxiao On Tue, Aug 5, 2014 at 6:22 AM, Thompson, Jeff wrote: > ? Right now, both ndn-cxx and ndn-cpp treat ".." as a illegal encoding > for a component and drop it. So, "/a/b/../c" simply becomes "/a/b/c". But > ndnx (ndnd-tlv) treat ".." as "up one level" and it becomes "/a/c". > > The question is whether the TLV specification should spell out the > correct behavior. > > - Jeff T > > From: , Jeff Thompson > Date: Tuesday, August 5, 2014 2:51 AM > To: nfd-dev > Subject: How to treat ".." in an NDN URI? > > The TLV specification for the NDN URI scheme says "To unambiguously > represent name components that would collide with the use of . and .. for > relative URIs, any component that consists solely of one or more periods is > encoded using three additional periods.". > http://named-data.net/doc/ndn-tlv/name.html#ndn-uri-scheme > > If an NDN URI uses the "relative" value of "..", how should the URI be > decoded. Specifically, should it be treated as "up one level" like in a > Unix path? For example, should the URI "/a/b/../c" be decoded as the name > "/a/c"? > > (This question comes from an issue on the ndn-js Redmine: > http://redmine.named-data.net/issues/1818 ). > > Thanks, > - Jeff T > -------------- next part -------------- An HTML attachment was scrubbed... 
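To make the preceding interpretation concrete, here is a minimal sketch of decoding one path segment of an ndn: URI under Junxiao's reading (the function name is hypothetical, percent-decoding is omitted, and this is not the ndn-cxx implementation):

#include <stdexcept>
#include <string>

// A segment made solely of periods sheds three periods ("..." -> empty
// component, "...." -> "."), per the TLV spec rule quoted earlier.
// A bare "." or ".." is a dot-segment and, following the RFC 3986
// reading above, is rejected rather than dropped or resolved.
std::string decodeUriComponent(const std::string& segment)
{
  if (segment.find_first_not_of('.') == std::string::npos) {
    if (segment.size() < 3)
      throw std::invalid_argument("dot-segment is not a valid NDN name component");
    return segment.substr(3);
  }
  return segment;
}

Under this behavior, "/a/b/../c" raises an error rather than silently becoming "/a/b/c" (current ndn-cxx/ndn-cpp) or "/a/c" (ndnd-tlv).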
From bzhang at cs.arizona.edu Tue Aug 5 13:54:55 2014 From: bzhang at cs.arizona.edu (Beichuan Zhang) Date: Tue, 5 Aug 2014 13:54:55 -0700 Subject: [Nfd-dev] Fwd: NFD Performance testing References: <53480A24.6030905@seas.wustl.edu> Message-ID: <9A27DCCE-2F6C-43F1-9704-BBD626441955@cs.arizona.edu> These were John DeHart's performance profiling results back in April. It was NFD 0.1 tested on ONL. Beichuan Begin forwarded message: > From: John DeHart > Subject: Re: NFD Performance testing > Date: April 11, 2014 at 8:28:36 AM MST > To: Alex Afanasyev > Cc: Beichuan Zhang , Junxiao Shi , Patrick Crowley , Haowei Yuan , "Ben Abraham, Hila" > > > Alex, > > I just re-ran my tests for forwarding interests and content for unique names > of short, medium and long lengths. > Looks like performance improved between 8% and 16% depending on length. > Before: > short: 13700 Interests/sec > medium: 9500 Interests/sec > long: 4300 Interests/sec > > Now: > short: 14900 Interests/sec > medium: 10500 Interests/sec > long: 5000 Interests/sec > > John > > On 4/10/14 8:08 PM, Alex Afanasyev wrote: >> John, we just merged a commit that optimizes hash computation. When you have time, can you try to rerun some of the evaluations and check if we have better numbers (I hope). >> >> Commit f52dac7f03ac9ba996769cf620badeeb147b6d43 (current master) includes CS fix as well, so you can just get latest code from master branch of github or gerrit. >> >> Thanks! >> >> --- >> Alex >> >> On Apr 10, 2014, at 12:37 PM, John DeHart wrote: >> >>> Beichuan, >>> >>> Yes, I have the full gprof output available. I just didn't want to email that >>> to everyone. There are some processing tools I have not tried out yet >>> that might make things easier to read but for now, here is a sample: >>> >>> http://www.arl.wustl.edu/~jdd/NDN_GPROF_RESULTS/ >>> >>> There you should see this file: gprof.out.no_content_long_names.txt >>> >>> If you search in there for "Call graph" you will get to the hierarchical view. >>> >>> If that is useful, I can put the rest of my files there also. >>> >>> John >>> >>> >>> >>> >>> On 4/10/14 2:26 PM, Beichuan Zhang wrote: >>>> Hi John, >>>> >>>> Thanks a lot! These are very informative and useful. >>>> >>>> To correlate these numbers with the code better, is it possible to get a hierarchical view of the functions and time? That'll make our analysis much easier. (Even the current flat form is already very useful.) >>>> >>>> Looking forward to the CS test results too. >>>> >>>> Beichuan >>>> >>>> >>>> On Apr 10, 2014, at 12:08 PM, John DeHart wrote: >>>> >>>>> All: >>>>> >>>>> Here is another set of data. This set takes content out of the picture. >>>>> I don't run any content servers in these tests so the Interests are never fulfilled. >>>>> I again vary the length of the names used to see that impact. >>>>> >>>>> Approximate rate nfd was able to handle: >>>>> short: 37500 Interests/sec >>>>> medium: 22400 Interests/sec >>>>> long: 7100 Interests/sec >>>>> >>>>> For the short name case I find it interesting that nfd::Cs::find() still consumes 1.67% of >>>>> our processing time. Remember there is no content ever returned so the CS should >>>>> always be empty. Not that we want to optimize anything for the case of an empty >>>>> CS, but that seems kind of high for a case where it doesn't have to do any searching. >>>>> >>>>> Again lots of usage for Block::Block().
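To see why ndn::Block::Block(ndn::Block const&) and boost::detail::shared_count::~shared_count() rise together in the profiles below, consider a simplified stand-in for the ownership pattern (this is not the real ndn::Block layout, just an illustration):

#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Simplified stand-in: a Block shares ownership of the underlying wire
// buffer and holds its parsed sub-TLVs by value.
struct MiniBlock {
  std::shared_ptr<const std::vector<uint8_t>> wire; // refcounted buffer
  std::vector<MiniBlock> elements;                  // nested sub-blocks
};

// Pass-by-value: bumps and later drops the buffer refcount and copies
// every nested element recursively, so one copy of a many-component
// name fans out into many element copies.
std::size_t countByValue(MiniBlock b) { return b.elements.size(); }

// Pass-by-const-reference: no refcount traffic, no element copies.
std::size_t countByRef(const MiniBlock& b) { return b.elements.size(); }

That fan-out is consistent with the roughly 970 Block copy-constructions per long-name Interest noted below.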
For my amusement, for the long name case, >>>>> I gathered all the usage of all the different Block::Block() signatures at the end of this >>>>> note and tried to match them up to see which ones get used the most. Not sure it tells >>>>> us anything but since I'm not familiar with the code I was curious. >>>>> >>>>> ndn::Block::Block(ndn::Block const&) on average gets invoked 970 per Interest >>>>> for the long names. >>>>> >>>>> I've asked Jeff Burke's group for some actual names they use so I can see >>>>> if my names are of reasonable lengths. >>>>> >>>>> Next I plan to do some tests where I load the CS and then bombard it with >>>>> Interests that will always match something stored. >>>>> >>>>> Also, I should note that I am NOT seeing any signs of memory growth. >>>>> >>>>> John >>>>> >>>>> >>>>> -------------------------------------------------------------------------------- >>>>> >>>>> These tests are for an applied load of unique interests with no content returned. >>>>> This should put a load on the PIT with no load on the CS. >>>>> >>>>> Two sets of tests were run. >>>>> 1. Optimized nfd >>>>> 2. Profiled nfd >>>>> >>>>> 1. Optimized nfd tests >>>>> For these tests nfd was built with the standard default compilation options, defaultFlags = ['-O2', '-g', '-Wall'] >>>>> >>>>> The following tests were run with 128 client all routing through one central nfd router. There are no >>>>> servers to provide content for the supplied interests. There are hosts running nfd that the interests are routed to. >>>>> >>>>> 16 client hosts running 8 ndn-traffic processes and one nfd. >>>>> 16 server hosts running one nfd. >>>>> 1 router host running 1 nfd as the central router. >>>>> >>>>> Three test cases for name length: >>>>> short: /example/000 >>>>> medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>> long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>> >>>>> 128 different base names: end of name ranged from 000 to 127. >>>>> A sequence number is appended by ndn-traffic to each Interest to force every name to be unique. >>>>> >>>>> Applied load was approximately 42000 interests/sec. >>>>> >>>>> Approximate rate nfd was able to handle: >>>>> short: 37500 Interests/sec >>>>> medium: 22400 Interests/sec >>>>> long: 7100 Interests/sec >>>>> >>>>> 2. Profiled nfd test >>>>> In order to generate gprof output, nfd is built with profile enabled, defaultFlags = ['-O2', '-pg', '-g', '-Wall']. >>>>> This obviously slows nfd down and the performance is not nearly what the optimized case shows. But what we >>>>> are interested in here is what gprof can tell us about which functions are consuming time. >>>>> >>>>> The following tests were run with 128 client/server pairs all routing through one central nfd router. >>>>> >>>>> Tests were run for 2,000,000 Interests received by nfd. Counter in pit code added to trigger call >>>>> to exit() so gmon.out could be generated. >>>>> >>>>> 16 client hosts running 8 ndn-traffic process and 1 nfd. >>>>> 16 server hosts running 8 ndn-traffic-server process and 1 nfd. >>>>> 1 router host running 1 nfd as the central router. 
>>>>> >>>>> Three test cases for name length: >>>>> short: /example/000 >>>>> medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>> long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>> >>>>> 128 different base names: end of name ranged from 000 to 127. >>>>> A sequence number is appended by ndn-traffic to each Interest to force every name to be unique. >>>>> >>>>> Applied load was approximately 25500 interests/sec. >>>>> In the short, medium and long test case, the central router nfd was not able to keep up with >>>>> the applied load. >>>>> >>>>> The gprof data shown below is from the Flat profile given by gprof. I'm only showing the top consumers >>>>> that consume at least 1% of the cpu time used. >>>>> >>>>> short: >>>>> % cumulative self self total >>>>> time seconds seconds calls s/call s/call name >>>>> 19.76 5.68 5.68 741559024 0.00 0.00 boost::detail::shared_count::~shared_count() >>>>> 13.24 9.49 3.81 150091648 0.00 0.00 ndn::Block::Block(ndn::Block const&) >>>>> 7.39 11.61 2.13 156984996 0.00 0.00 ndn::Block::~Block() >>>>> 4.21 12.82 1.21 1914454 0.00 0.00 nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) >>>>> 4.18 14.02 1.20 8001344 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) >>>>> 3.93 15.15 1.13 9919137 0.00 0.00 ndn::Name::toUri() const >>>>> 2.99 16.01 0.86 2000193 0.00 0.00 ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration > const&, boost::chrono::duration > const&, boost::function const&) >>>>> 1.81 16.53 0.52 1916756 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>> 1.79 17.05 0.52 5914452 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const >>>>> 1.67 17.53 0.48 1999999 0.00 0.00 nfd::Cs::find(ndn::Interest const&) const >>>>> 1.67 18.01 0.48 1 0.48 27.91 boost::asio::detail::task_io_service::run(boost::system::error_code&) >>>>> 1.46 18.43 0.42 2000129 0.00 0.00 nfd::NameTree::lookup(ndn::Name const&) >>>>> 1.04 18.73 0.30 13828583 0.00 0.00 boost::detail::function::functor_manager >, boost::_bi::list2, boost::_bi::value > > > >::manage(boost::detail::function::function_buffer const&, boost::detail::function::function_buffer&, boost::detail::function::functor_manager_operation_type) >>>>> >>>>> >>>>> medium: >>>>> % cumulative self self total >>>>> time seconds seconds calls s/call s/call name >>>>> 20.50 8.60 8.60 1266365703 0.00 0.00 boost::detail::shared_count::~shared_count() >>>>> 18.09 16.19 7.59 348085329 0.00 0.00 ndn::Block::Block(ndn::Block const&) >>>>> 7.08 19.16 2.97 351410680 0.00 0.00 ndn::Block::~Block() >>>>> 6.42 21.86 2.70 20001146 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) >>>>> 5.72 24.26 2.40 21952166 0.00 0.00 ndn::Name::toUri() const >>>>> 3.13 25.57 1.32 21951129 0.00 0.00 nfd::name_tree::hashName(ndn::Name const&) >>>>> 2.86 26.77 1.20 1947681 0.00 0.00 nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) >>>>> 2.29 27.73 0.96 2000129 0.00 0.00 nfd::NameTree::lookup(ndn::Name const&) >>>>> 2.24 28.67 0.94 2000193 0.00 0.00 ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration > const&, boost::chrono::duration > const&, boost::function const&) >>>>> 1.99 29.51 0.84 5947679 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const >>>>> 1.50 30.14 0.63 1949983 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>> 1.49 30.76 0.63 78018430 0.00 0.00 std::vector 
>::_M_insert_aux(__gnu_cxx::__normal_iterator > >, ndn::Block const&) >>>>> >>>>> >>>>> long: >>>>> % cumulative self self total >>>>> time seconds seconds calls ms/call ms/call name >>>>> 25.38 26.82 26.82 1940055896 0.00 0.00 ndn::Block::Block(ndn::Block const&) >>>>> 22.66 50.75 23.94 4280104948 0.00 0.00 boost::detail::shared_count::~shared_count() >>>>> 8.42 59.65 8.90 57984812 0.00 0.00 ndn::Name::toUri() const >>>>> 6.59 66.61 6.97 56000552 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) >>>>> 6.56 73.54 6.93 57983775 0.00 0.00 nfd::name_tree::hashName(ndn::Name const&) >>>>> 6.48 80.39 6.85 1941813258 0.00 0.00 ndn::Block::~Block() >>>>> 3.27 83.84 3.45 2000129 0.00 0.04 nfd::NameTree::lookup(ndn::Name const&) >>>>> 2.40 86.37 2.54 282015065 0.00 0.00 std::vector >::_M_insert_aux(__gnu_cxx::__normal_iterator > >, ndn::Block const&) >>>>> 2.06 88.55 2.18 5980919 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const >>>>> 1.45 90.08 1.53 564030243 0.00 0.00 __tcf_1 >>>>> 1.00 91.14 1.06 1983223 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>> >>>>> Block::Block Analysis: >>>>> Here are all the gprof listings for Block::Block from the flat profile for the long name case above. >>>>> The numbers in parens (#) are my addition to try to match this with the actual code. >>>>> >>>>> 25.38 26.82 26.82 1940055896 0.00 0.00 (1) ndn::Block::Block(ndn::Block const&) >>>>> 0.37 96.51 0.40 64016585 0.00 0.00 (2) ndn::Block::Block(boost::shared_ptr const&, unsigned int, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&) >>>>> 0.05 102.92 0.06 62072947 0.00 0.00 (3) ndn::Block::Block(unsigned int) >>>>> 0.02 104.68 0.02 16006475 0.00 0.00 (4) ndn::Block::Block() >>>>> 0.02 105.14 0.02 (5) ndn::Block::Block(std::istream&) >>>>> 0.00 105.64 0.00 2457 0.00 0.00 (6) ndn::Block::Block(boost::shared_ptr const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, bool) >>>>> 0.00 105.64 0.00 1744 0.00 0.00 (7) ndn::Block::Block(unsigned int, boost::shared_ptr const&) >>>>> 0.00 105.64 0.00 1196 0.00 0.00 (8) ndn::Block::Block(boost::shared_ptr const&) >>>>> 0.00 105.64 0.00 517 0.00 0.00 (9) ndn::Block::Block(unsigned int, ndn::Block const&) >>>>> 0.00 105.64 0.00 1 0.00 20.00 (10) ndn::Block::Block(unsigned char const*, unsigned long) >>>>> >>>>> And here are all the constructors' signatures from block.cpp: >>>>> ndn-cpp-dev/src/encoding$ grep "Block::Block" block.cpp >>>>> (4) Block::Block() >>>>> (1) Block::Block(const EncodingBuffer& buffer) >>>>> (2) Block::Block(const ConstBufferPtr& wire, >>>>> uint32_t type, >>>>> const Buffer::const_iterator& begin, const Buffer::const_iterator& end, >>>>> const Buffer::const_iterator& valueBegin, const Buffer::const_iterator& valueEnd) >>>>> (8) Block::Block(const ConstBufferPtr& buffer) >>>>> (6) Block::Block(const ConstBufferPtr& buffer, >>>>> const Buffer::const_iterator& begin, const Buffer::const_iterator& end, >>>>> bool verifyLength/* = true*/) >>>>> (5) Block::Block(std::istream& is) >>>>> (10) Block::Block(const uint8_t* buffer, size_t maxlength) >>>>> Block::Block(const void* bufferX, size_t maxlength) >>>>> (3) Block::Block(uint32_t type) >>>>> (7) Block::Block(uint32_t type, const ConstBufferPtr& value) >>>>> (9) Block::Block(uint32_t type, const Block& value) >>>>> >>>>> >>>>> > -------------- next part -------------- An HTML attachment was 
scrubbed... URL: From bzhang at cs.arizona.edu Tue Aug 5 13:57:48 2014 From: bzhang at cs.arizona.edu (Beichuan Zhang) Date: Tue, 5 Aug 2014 13:57:48 -0700 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: References: Message-ID: <5FBAF50F-C71C-4294-9FB3-AE4BB0D4FB9C@cs.arizona.edu> The link gives the setup steps, but is there any performance numbers, like #pps, delay, loss, etc.? Beichuan On Aug 5, 2014, at 9:25 AM, Steve DiBenedetto wrote: > Chengyu did some performance profiling a bit ago: http://redmine.named-data.net/issues/1621. There's also a step-by-step guide for how to replicate the profiling attached to the issue. > > Hope that helps, > Steve > > On Aug 5, 2014, at 10:22 AM, Burke, Jeff wrote: > >> Hi Junxiao, >> >> Thanks. I was aware of this tool but asking if any benchmarks had already been generated as part of the development and testing process? >> >> Jeff >> >> From: Junxiao Shi >> Date: Tue, 5 Aug 2014 09:20:27 -0700 >> To: Jeff Burke >> Cc: "nfd-dev at lists.cs.ucla.edu" >> Subject: Re: [Nfd-dev] NFD benchmarking results? >> >>> Hi Jeff >>> >>> You may use ndn-traffic-generator to run benchmarks as you need. >>> Please be sure to define a traffic pattern that reflects the reality of the application you are trying to model. >>> >>> Contact John DeHart if you want generic benchmark results collected on ONL testbed. >>> >>> Yours, Junxiao >>> >>> >>> On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff wrote: >>>> >>>> Hi, >>>> >>>> We are trying to track down what is causing packet loss / delay when using ndnrtc over NFD. We will prepare something to replicate the results later this week after the Cisco visit. In the meantime, are there any benchmarks for NFD (on the testbed or not) that would give us some sense of packet processing times and expected throughput on various platforms in comparison to ndnx? I seem to recall that there was some internal testing of this awhile ago. If not, would it be possible for the NFD team to perform some basic benchmarks and comparisons? This would help us troubleshoot this problem. >>>> >>>> Thanks, >>>> Jeff >>>> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdd at seas.wustl.edu Tue Aug 5 18:11:53 2014 From: jdd at seas.wustl.edu (John DeHart) Date: Tue, 5 Aug 2014 20:11:53 -0500 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: References: Message-ID: <53E180D9.8000200@seas.wustl.edu> Jeff, As others have mentioned, I did some benchmarking on a very early version of NFD. I will try to re-run those tests when I get back from vacation. Can you describe what you are seeing? Are the problems you are seeing always present or do they come and go? Do you have problems if you are just using one node? For example, if your producer and consumer are both homed off REMAP, do you still have the problem? Does the problem get worse as your producer and consumer are farther apart? John On 8/5/14, 11:22 AM, Burke, Jeff wrote: > Hi Junxiao, > > Thanks. I was aware of this tool but asking if any benchmarks had > already been generated as part of the development and testing process? 
> > Jeff > > From: Junxiao Shi > > Date: Tue, 5 Aug 2014 09:20:27 -0700 > To: Jeff Burke > > Cc: "nfd-dev at lists.cs.ucla.edu " > > > Subject: Re: [Nfd-dev] NFD benchmarking results? > > Hi Jeff > > You may use ndn-traffic-generator > to run > benchmarks as you need. > Please be sure to define a traffic pattern that reflects the > reality of the application you are trying to model. > > Contact John DeHart if you want generic benchmark results > collected on ONL testbed. > > Yours, Junxiao > > > On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff > wrote: > > > Hi, > > We are trying to track down what is causing packet loss / > delay when using ndnrtc over NFD. We will prepare something to > replicate the results later this week after the Cisco visit. > In the meantime, are there any benchmarks for NFD (on the > testbed or not) that would give us some sense of packet > processing times and expected throughput on various platforms > in comparison to ndnx? I seem to recall that there was some > internal testing of this awhile ago. If not, would it be > possible for the NFD team to perform some basic benchmarks and > comparisons? This would help us troubleshoot this problem. > > Thanks, > Jeff > > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From chengy.fan at gmail.com Tue Aug 5 18:26:49 2014 From: chengy.fan at gmail.com (Chengyu Fan) Date: Tue, 5 Aug 2014 19:26:49 -0600 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: <53E180D9.8000200@seas.wustl.edu> References: <53E180D9.8000200@seas.wustl.edu> Message-ID: John, There is a nfd task (http://redmine.named-data.net/issues/1819) created for the forwarding benchmark, and I'll do it using your script. Any suggestions? On Tue, Aug 5, 2014 at 7:11 PM, John DeHart wrote: > Jeff, > > As others have mentioned, I did some benchmarking on a very early version > of NFD. > I will try to re-run those tests when I get back from vacation. > > Can you describe what you are seeing? > Are the problems you are seeing always present or do they come and go? > Do you have problems if you are just using one node? For example, if your > producer > and consumer are both homed off REMAP, do you still have the problem? > Does the problem get worse as your producer and consumer are farther apart? > > John > > > On 8/5/14, 11:22 AM, Burke, Jeff wrote: > > Hi Junxiao, > > Thanks. I was aware of this tool but asking if any benchmarks had > already been generated as part of the development and testing process? > > Jeff > > From: Junxiao Shi > Date: Tue, 5 Aug 2014 09:20:27 -0700 > To: Jeff Burke > Cc: "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NFD benchmarking results? > > Hi Jeff > > You may use ndn-traffic-generator > to run benchmarks > as you need. > Please be sure to define a traffic pattern that reflects the reality of > the application you are trying to model. > > Contact John DeHart if you want generic benchmark results collected on > ONL testbed. > > Yours, Junxiao > > > On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff wrote: > >> >> Hi, >> >> We are trying to track down what is causing packet loss / delay when >> using ndnrtc over NFD. We will prepare something to replicate the results >> later this week after the Cisco visit. 
In the meantime, are there any >> benchmarks for NFD (on the testbed or not) that would give us some sense of >> packet processing times and expected throughput on various platforms in >> comparison to ndnx? I seem to recall that there was some internal testing >> of this awhile ago. If not, would it be possible for the NFD team to >> perform some basic benchmarks and comparisons? This would help us >> troubleshoot this problem. >> >> Thanks, >> Jeff >> >> > > _______________________________________________ > Nfd-dev mailing listNfd-dev at lists.cs.ucla.eduhttp://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > -- Thanks, Chengyu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdd at seas.wustl.edu Wed Aug 6 06:28:50 2014 From: jdd at seas.wustl.edu (John DeHart) Date: Wed, 6 Aug 2014 08:28:50 -0500 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: References: <53E180D9.8000200@seas.wustl.edu> Message-ID: <53E22D92.9090504@seas.wustl.edu> Chengyu, The scripts are set up to run the installed version of nfd on ONL. Right now that is still 0.1.0. You can build your own version and modify the start_*.sh scripts to use a local copy or update your PATH variable to point to your local nfd/nrd before the installed version. The repo is at https://github.com/WU-ARL/NFD_Performance_Testing_on_ONL I'll also see if I can find time to update ONL to the latest version of nfd. John On 8/5/14, 8:26 PM, Chengyu Fan wrote: > John, > > There is a nfd task (http://redmine.named-data.net/issues/1819) > created for the forwarding benchmark, and I'll do it using your > script. Any suggestions? > > > On Tue, Aug 5, 2014 at 7:11 PM, John DeHart > wrote: > > Jeff, > > As others have mentioned, I did some benchmarking on a very early > version of NFD. > I will try to re-run those tests when I get back from vacation. > > Can you describe what you are seeing? > Are the problems you are seeing always present or do they come and go? > Do you have problems if you are just using one node? For example, > if your producer > and consumer are both homed off REMAP, do you still have the problem? > Does the problem get worse as your producer and consumer are > farther apart? > > John > > > On 8/5/14, 11:22 AM, Burke, Jeff wrote: >> Hi Junxiao, >> >> Thanks. I was aware of this tool but asking if any benchmarks >> had already been generated as part of the development and testing >> process? >> >> Jeff >> >> From: Junxiao Shi > > >> Date: Tue, 5 Aug 2014 09:20:27 -0700 >> To: Jeff Burke > >> Cc: "nfd-dev at lists.cs.ucla.edu >> " > > >> Subject: Re: [Nfd-dev] NFD benchmarking results? >> >> Hi Jeff >> >> You may use ndn-traffic-generator >> to run >> benchmarks as you need. >> Please be sure to define a traffic pattern that reflects the >> reality of the application you are trying to model. >> >> Contact John DeHart if you want generic benchmark results >> collected on ONL testbed. >> >> Yours, Junxiao >> >> >> On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff >> > wrote: >> >> >> Hi, >> >> We are trying to track down what is causing packet loss / >> delay when using ndnrtc over NFD. We will prepare >> something to replicate the results later this week after >> the Cisco visit. 
In the meantime, are there any >> benchmarks for NFD (on the testbed or not) that would >> give us some sense of packet processing times and >> expected throughput on various platforms in comparison to >> ndnx? I seem to recall that there was some internal >> testing of this awhile ago. If not, would it be possible >> for the NFD team to perform some basic benchmarks and >> comparisons? This would help us troubleshoot this problem. >> >> Thanks, >> Jeff >> >> >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > > > -- > Thanks, > > Chengyu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.UCLA.EDU Wed Aug 6 06:54:12 2014 From: jburke at remap.UCLA.EDU (Burke, Jeff) Date: Wed, 6 Aug 2014 13:54:12 +0000 Subject: [Nfd-dev] NFD benchmarking results? In-Reply-To: <53E180D9.8000200@seas.wustl.edu> Message-ID: John, Let me get the details from Peter and get back to you shortly. Thanks, Jeff From: John DeHart > Date: Tue, 5 Aug 2014 20:11:53 -0500 To: > Subject: Re: [Nfd-dev] NFD benchmarking results? Jeff, As others have mentioned, I did some benchmarking on a very early version of NFD. I will try to re-run those tests when I get back from vacation. Can you describe what you are seeing? Are the problems you are seeing always present or do they come and go? Do you have problems if you are just using one node? For example, if your producer and consumer are both homed off REMAP, do you still have the problem? Does the problem get worse as your producer and consumer are farther apart? John On 8/5/14, 11:22 AM, Burke, Jeff wrote: Hi Junxiao, Thanks. I was aware of this tool but asking if any benchmarks had already been generated as part of the development and testing process? Jeff From: Junxiao Shi > Date: Tue, 5 Aug 2014 09:20:27 -0700 To: Jeff Burke > Cc: "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] NFD benchmarking results? Hi Jeff You may use ndn-traffic-generator to run benchmarks as you need. Please be sure to define a traffic pattern that reflects the reality of the application you are trying to model. Contact John DeHart if you want generic benchmark results collected on ONL testbed. Yours, Junxiao On Tue, Aug 5, 2014 at 8:46 AM, Burke, Jeff > wrote: Hi, We are trying to track down what is causing packet loss / delay when using ndnrtc over NFD. We will prepare something to replicate the results later this week after the Cisco visit. In the meantime, are there any benchmarks for NFD (on the testbed or not) that would give us some sense of packet processing times and expected throughput on various platforms in comparison to ndnx? I seem to recall that there was some internal testing of this awhile ago. If not, would it be possible for the NFD team to perform some basic benchmarks and comparisons? This would help us troubleshoot this problem. Thanks, Jeff _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.eduhttp://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jefft0 at remap.ucla.edu Wed Aug 6 09:49:09 2014 From: jefft0 at remap.ucla.edu (Thompson, Jeff) Date: Wed, 6 Aug 2014 16:49:09 +0000 Subject: [Nfd-dev] How to treat ".." in an NDN URI? In-Reply-To: References: Message-ID: Hi Junxiao, Using your example of the URL for RFC3986, the following link works: http://tools.ietf.org/html/rfcblahblahblah/../rfc3986#section-3.3 So ".." is illegal in a URI, but legal in a URL? Maybe the support for .. in a URL is non-standard? - Jeff T From: Junxiao Shi > Date: Tuesday, August 5, 2014 9:27 AM To: Jeff Thompson > Cc: nfd-dev > Subject: Re: [Nfd-dev] How to treat ".." in an NDN URI? Hi JeffT TLV spec cites RFC3986 for URI syntax. The processing of ".." doesn't need to be mentioned in TLV spec because it's inherited from RFC3986. RFC3986 says: The path segments "." and "..", also known as dot-segments, are defined for relative reference within the path name hierarchy. They are intended for use at the beginning of a relative-path reference to indicate relative position within the hierarchical tree of names. Therefore, if ".." appears within an absolute ndn URI, the entire URI is invalid and should raise an error. Yours, Junxiao On Tue, Aug 5, 2014 at 6:22 AM, Thompson, Jeff > wrote: Right now, both ndn-cxx and ndn-cpp treat ".." as an illegal encoding for a component and drop it. So, "/a/b/../c" simply becomes "/a/b/c". But ndnx (ndnd-tlv) treats ".." as "up one level" and it becomes "/a/c". The question is whether the TLV specification should spell out the correct behavior. - Jeff T From: , Jeff Thompson > Date: Tuesday, August 5, 2014 2:51 AM To: nfd-dev > Subject: How to treat ".." in an NDN URI? The TLV specification for the NDN URI scheme says "To unambiguously represent name components that would collide with the use of . and .. for relative URIs, any component that consists solely of one or more periods is encoded using three additional periods.". http://named-data.net/doc/ndn-tlv/name.html#ndn-uri-scheme If an NDN URI uses the "relative" value of "..", how should the URI be decoded? Specifically, should it be treated as "up one level" like in a Unix path? For example, should the URI "/a/b/../c" be decoded as the name "/a/c"? (This question comes from an issue on the ndn-js Redmine: http://redmine.named-data.net/issues/1818). Thanks, - Jeff T -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.UCLA.EDU Thu Aug 7 06:50:27 2014 From: jburke at remap.UCLA.EDU (Burke, Jeff) Date: Thu, 7 Aug 2014 13:50:27 +0000 Subject: [Nfd-dev] Fwd: NFD Performance testing In-Reply-To: <9A27DCCE-2F6C-43F1-9704-BBD626441955@cs.arizona.edu> Message-ID: Thanks. Were there any min/avg/max latency and/or packet drop numbers? High latency (>200ms) or drops is the most likely cause of what we are seeing. Jeff From: "bzhang at cs.arizona.edu" > Date: Tue, 5 Aug 2014 13:54:55 -0700 To: "nfd-dev at lists.cs.ucla.edu" > Subject: [Nfd-dev] Fwd: NFD Performance testing These were John DeHart's performance profiling results back in April. It was NFD 0.1 tested on ONL. Beichuan Begin forwarded message: From: John DeHart > Subject: Re: NFD Performance testing Date: April 11, 2014 at 8:28:36 AM MST To: Alex Afanasyev > Cc: Beichuan Zhang >, Junxiao Shi >, Patrick Crowley >, Haowei Yuan >, "Ben Abraham, Hila" > Alex, I just re-ran my tests for forwarding interests and content for unique names of short, medium and long lengths. Looks like performance improved between 8% and 16% depending on length.
Before: short: 13700 Interests/sec medium: 9500 Interests/sec long: 4300 Interests/sec Now: short: 14900 Interests/sec medium: 10500 Interests/sec long: 5000 Interests/sec John On 4/10/14 8:08 PM, Alex Afanasyev wrote: John, we just merged a commit that optimizes hash computation. When you have time, can you try to rerun some of the evaluations and check if we have better numbers (I hope). Commit f52dac7f03ac9ba996769cf620badeeb147b6d43 (current master) includes CS fix as well, so you can just get latest code from master branch of github or gerrit. Thanks! --- Alex On Apr 10, 2014, at 12:37 PM, John DeHart > wrote: Beichuan, Yes, I have the full gprof output available. I just didn't want to email that to everyone. There are some processing tools I have not tried out yet that might make things easier to read but for now, here is a sample: http://www.arl.wustl.edu/~jdd/NDN_GPROF_RESULTS/ There you should see this file: gprof.out.no_content_long_names.txt If you search in there for "Call graph" you will get to the hierarchical view. If that is useful, I can put the rest of my files there also. John On 4/10/14 2:26 PM, Beichuan Zhang wrote: Hi John, Thanks a lot! These are very informative and useful. To correlate these numbers with the code better, is it possible to get a hierarchical view of the functions and time? That'll make our analysis much easier. (Even the current flat form is already very useful.) Looking forward to the CS test results too. Beichuan On Apr 10, 2014, at 12:08 PM, John DeHart > wrote: All: Here is another set of data. This set takes content out of the picture. I don't run any content servers in these tests so the Interests are never fulfilled. I again vary the length of the names used to see that impact. Approximate rate nfd was able to handle: short: 37500 Interests/sec medium: 22400 Interests/sec long: 7100 Interests/sec For the short name case I find it interesting that nfd::Cs::find() still consumes 1.67% of our processing time. Remember there is no content ever returned so the CS should always be empty. Not that we want to optimize anything for the case of an empty CS, but that seems kind of high for a case where it doesn't have to do any searching. Again lots of usage for Block::Block(). For my amusement, for the long name case, I gathered all the usage of all the different Block::Block() signatures at the end of this note and tried to match them up to see which ones get used the most. Not sure it tells us anything but since I'm not familiar with the code I was curious. ndn::Block::Block(ndn::Block const&) on average gets invoked 970 per Interest for the long names. I've asked Jeff Burke's group for some actual names they use so I can see if my names are of reasonable lengths. Next I plan to do some tests where I load the CS and then bombard it with Interests that will always match something stored. Also, I should note that I am NOT seeing any signs of memory growth. John -------------------------------------------------------------------------------- These tests are for an applied load of unique interests with no content returned. This should put a load on the PIT with no load on the CS. Two sets of tests were run. 1. Optimized nfd 2. Profiled nfd 1. Optimized nfd tests For these tests nfd was built with the standard default compilation options, defaultFlags = ['-O2', '-g', '-Wall'] The following tests were run with 128 client all routing through one central nfd router. There are no servers to provide content for the supplied interests. 
There are hosts running nfd that the interests are routed to. 16 client hosts running 8 ndn-traffic processes and one nfd. 16 server hosts running one nfd. 1 router host running 1 nfd as the central router. Three test cases for name length: short: /example/000 medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 128 different base names: end of name ranged from 000 to 127. A sequence number is appended by ndn-traffic to each Interest to force every name to be unique. Applied load was approximately 42000 interests/sec. Approximate rate nfd was able to handle: short: 37500 Interests/sec medium: 22400 Interests/sec long: 7100 Interests/sec 2. Profiled nfd test In order to generate gprof output, nfd is built with profile enabled, defaultFlags = ['-O2', '-pg', '-g', '-Wall']. This obviously slows nfd down and the performance is not nearly what the optimized case shows. But what we are interested in here is what gprof can tell us about which functions are consuming time. The following tests were run with 128 client/server pairs all routing through one central nfd router. Tests were run for 2,000,000 Interests received by nfd. Counter in pit code added to trigger call to exit() so gmon.out could be generated. 16 client hosts running 8 ndn-traffic process and 1 nfd. 16 server hosts running 8 ndn-traffic-server process and 1 nfd. 1 router host running 1 nfd as the central router. Three test cases for name length: short: /example/000 medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 128 different base names: end of name ranged from 000 to 127. A sequence number is appended by ndn-traffic to each Interest to force every name to be unique. Applied load was approximately 25500 interests/sec. In the short, medium and long test case, the central router nfd was not able to keep up with the applied load. The gprof data shown below is from the Flat profile given by gprof. I'm only showing the top consumers that consume at least 1% of the cpu time used. 
short: % cumulative self self total time seconds seconds calls s/call s/call name 19.76 5.68 5.68 741559024 0.00 0.00 boost::detail::shared_count::~shared_count() 13.24 9.49 3.81 150091648 0.00 0.00 ndn::Block::Block(ndn::Block const&) 7.39 11.61 2.13 156984996 0.00 0.00 ndn::Block::~Block() 4.21 12.82 1.21 1914454 0.00 0.00 nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) 4.18 14.02 1.20 8001344 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) 3.93 15.15 1.13 9919137 0.00 0.00 ndn::Name::toUri() const 2.99 16.01 0.86 2000193 0.00 0.00 ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration > const&, boost::chrono::duration > const&, boost::function const&) 1.81 16.53 0.52 1916756 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const 1.79 17.05 0.52 5914452 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const 1.67 17.53 0.48 1999999 0.00 0.00 nfd::Cs::find(ndn::Interest const&) const 1.67 18.01 0.48 1 0.48 27.91 boost::asio::detail::task_io_service::run(boost::system::error_code&) 1.46 18.43 0.42 2000129 0.00 0.00 nfd::NameTree::lookup(ndn::Name const&) 1.04 18.73 0.30 13828583 0.00 0.00 boost::detail::function::functor_manager >, boost::_bi::list2, boost::_bi::value > > > >::manage(boost::detail::function::function_buffer const&, boost::detail::function::function_buffer&, boost::detail::function::functor_manager_operation_type) medium: % cumulative self self total time seconds seconds calls s/call s/call name 20.50 8.60 8.60 1266365703 0.00 0.00 boost::detail::shared_count::~shared_count() 18.09 16.19 7.59 348085329 0.00 0.00 ndn::Block::Block(ndn::Block const&) 7.08 19.16 2.97 351410680 0.00 0.00 ndn::Block::~Block() 6.42 21.86 2.70 20001146 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) 5.72 24.26 2.40 21952166 0.00 0.00 ndn::Name::toUri() const 3.13 25.57 1.32 21951129 0.00 0.00 nfd::name_tree::hashName(ndn::Name const&) 2.86 26.77 1.20 1947681 0.00 0.00 nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) 2.29 27.73 0.96 2000129 0.00 0.00 nfd::NameTree::lookup(ndn::Name const&) 2.24 28.67 0.94 2000193 0.00 0.00 ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration > const&, boost::chrono::duration > const&, boost::function const&) 1.99 29.51 0.84 5947679 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const 1.50 30.14 0.63 1949983 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const 1.49 30.76 0.63 78018430 0.00 0.00 std::vector >::_M_insert_aux(__gnu_cxx::__normal_iterator > >, ndn::Block const&) long: % cumulative self self total time seconds seconds calls ms/call ms/call name 25.38 26.82 26.82 1940055896 0.00 0.00 ndn::Block::Block(ndn::Block const&) 22.66 50.75 23.94 4280104948 0.00 0.00 boost::detail::shared_count::~shared_count() 8.42 59.65 8.90 57984812 0.00 0.00 ndn::Name::toUri() const 6.59 66.61 6.97 56000552 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) 6.56 73.54 6.93 57983775 0.00 0.00 nfd::name_tree::hashName(ndn::Name const&) 6.48 80.39 6.85 1941813258 0.00 0.00 ndn::Block::~Block() 3.27 83.84 3.45 2000129 0.00 0.04 nfd::NameTree::lookup(ndn::Name const&) 2.40 86.37 2.54 282015065 0.00 0.00 std::vector >::_M_insert_aux(__gnu_cxx::__normal_iterator > >, ndn::Block const&) 2.06 88.55 2.18 5980919 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const 1.45 90.08 1.53 564030243 0.00 0.00 __tcf_1 1.00 91.14 1.06 1983223 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const Block::Block 
Analysis: Here are all the gprof listings for Block::Block from the flat profile for the long name case above. The numbers in parens (#) are my addition to try to match this with the actual code. 25.38 26.82 26.82 1940055896 0.00 0.00 (1) ndn::Block::Block(ndn::Block const&) 0.37 96.51 0.40 64016585 0.00 0.00 (2) ndn::Block::Block(boost::shared_ptr const&, unsigned int, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&) 0.05 102.92 0.06 62072947 0.00 0.00 (3) ndn::Block::Block(unsigned int) 0.02 104.68 0.02 16006475 0.00 0.00 (4) ndn::Block::Block() 0.02 105.14 0.02 (5) ndn::Block::Block(std::istream&) 0.00 105.64 0.00 2457 0.00 0.00 (6) ndn::Block::Block(boost::shared_ptr const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, bool) 0.00 105.64 0.00 1744 0.00 0.00 (7) ndn::Block::Block(unsigned int, boost::shared_ptr const&) 0.00 105.64 0.00 1196 0.00 0.00 (8) ndn::Block::Block(boost::shared_ptr const&) 0.00 105.64 0.00 517 0.00 0.00 (9) ndn::Block::Block(unsigned int, ndn::Block const&) 0.00 105.64 0.00 1 0.00 20.00 (10) ndn::Block::Block(unsigned char const*, unsigned long) And here are all the constructors' signatures from block.cpp: ndn-cpp-dev/src/encoding$ grep "Block::Block" block.cpp (4) Block::Block() (1) Block::Block(const EncodingBuffer& buffer) (2) Block::Block(const ConstBufferPtr& wire, uint32_t type, const Buffer::const_iterator& begin, const Buffer::const_iterator& end, const Buffer::const_iterator& valueBegin, const Buffer::const_iterator& valueEnd) (8) Block::Block(const ConstBufferPtr& buffer) (6) Block::Block(const ConstBufferPtr& buffer, const Buffer::const_iterator& begin, const Buffer::const_iterator& end, bool verifyLength/* = true*/) (5) Block::Block(std::istream& is) (10) Block::Block(const uint8_t* buffer, size_t maxlength) Block::Block(const void* bufferX, size_t maxlength) (3) Block::Block(uint32_t type) (7) Block::Block(uint32_t type, const ConstBufferPtr& value) (9) Block::Block(uint32_t type, const Block& value) _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdd at seas.wustl.edu Thu Aug 7 06:59:11 2014 From: jdd at seas.wustl.edu (John DeHart) Date: Thu, 7 Aug 2014 08:59:11 -0500 Subject: [Nfd-dev] Fwd: NFD Performance testing In-Reply-To: References: Message-ID: <53E3862F.4030308@seas.wustl.edu> Jeff, I was just doing throughput measurements. Nothing on latency. What is the topology of the experiments you are doing? Does your traffic go through a lot of Testbed nodes? Do you have lots of producers and consumers? We are having NLSR route stability issues that we are working on so knowing what you are trying to do will help us understand if that could be impacting you. John On 8/7/14, 8:50 AM, Burke, Jeff wrote: > Thanks. Were there any min/avg/max latency and/or packet drop numbers? > High latency (>200ms) or drops is the most likely cause of what we > are seeing.) > Jeff > > > From: "bzhang at cs.arizona.edu " > > > Date: Tue, 5 Aug 2014 13:54:55 -0700 > To: "nfd-dev at lists.cs.ucla.edu " > > > Subject: [Nfd-dev] Fwd: NFD Performance testing > > This was John DeHart's performance profiling results back in > April. It was NFD 0.1 tested on ONL. 
> > Beichuan > > Begin forwarded message: > >> *From: *John DeHart > >> *Subject: * *Re: NFD Performance testing* >> *Date: *April 11, 2014 at 8:28:36 AM MST >> *To: *Alex Afanasyev > > >> *Cc: *Beichuan Zhang > >, Junxiao Shi >> > >, Patrick Crowley >> >, Haowei Yuan >> >, "Ben Abraham, Hila" >> > >> >> >> Alex, >> >> I just re-ran my tests for forwarding interests and content for >> unique names >> of short, medium and long lengths. >> Looks like performance improved between 8% and 16% depending on >> length. >> Before: >> short: 13700 Interests/sec >> medium: 9500 Interests/sec >> long: 4300 Interests/sec >> >> Now: >> short: 14900 Interests/sec >> medium: 10500 Interests/sec >> long: 5000 Interests/sec >> >> John >> >> On 4/10/14 8:08 PM, Alex Afanasyev wrote: >>> John, we just merged a commit that optimizes hash computation. >>> When you have time, can you try to rerun some of the >>> evaluations and check if we have better numbers (I hope). >>> >>> Commit f52dac7f03ac9ba996769cf620badeeb147b6d43 (current master) >>> includes CS fix as well, so you can just get latest code from >>> master branch of github or gerrit. >>> >>> Thanks! >>> >>> --- >>> Alex >>> >>> On Apr 10, 2014, at 12:37 PM, John DeHart >> > wrote: >>> >>>> Beichuan, >>>> >>>> Yes, I have the full gprof output available. I just didn't want >>>> to email that >>>> to everyone. There are some processing tools I have not tried >>>> out yet >>>> that might make things easier to read but for now, here is a >>>> sample: >>>> >>>> http://www.arl.wustl.edu/~jdd/NDN_GPROF_RESULTS/ >>>> >>>> >>>> There you should see this file: >>>> gprof.out.no_content_long_names.txt >>>> >>>> If you search in there for "Call graph" you will get to the >>>> hierarchical view. >>>> >>>> If that is useful, I can put the rest of my files there also. >>>> >>>> John >>>> >>>> >>>> >>>> >>>> On 4/10/14 2:26 PM, Beichuan Zhang wrote: >>>>> Hi John, >>>>> >>>>> Thanks a lot! These are very informative and useful. >>>>> >>>>> To correlate these numbers with the code better, is it >>>>> possible to get a hierarchical view of the functions and time? >>>>> That'll make our analysis much easier. (Even the current flat >>>>> form is already very useful.) >>>>> >>>>> Looking forward to the CS test results too. >>>>> >>>>> Beichuan >>>>> >>>>> >>>>> On Apr 10, 2014, at 12:08 PM, John DeHart >>>> > wrote: >>>>> >>>>>> All: >>>>>> >>>>>> Here is another set of data. This set takes content out of >>>>>> the picture. >>>>>> I don't run any content servers in these tests so the >>>>>> Interests are never fulfilled. >>>>>> I again vary the length of the names used to see that impact. >>>>>> >>>>>> Approximate rate nfd was able to handle: >>>>>> short: 37500 Interests/sec >>>>>> medium: 22400 Interests/sec >>>>>> long: 7100 Interests/sec >>>>>> >>>>>> For the short name case I find it interesting that >>>>>> nfd::Cs::find() still consumes 1.67% of >>>>>> our processing time. Remember there is no content ever >>>>>> returned so the CS should >>>>>> always be empty. Not that we want to optimize anything for >>>>>> the case of an empty >>>>>> CS, but that seems kind of high for a case where it doesn't >>>>>> have to do any searching. >>>>>> >>>>>> Again lots of usage for Block::Block(). For my amusement, for >>>>>> the long name case, >>>>>> I gathered all the usage of all the different Block::Block() >>>>>> signatures at the end of this >>>>>> note and tried to match them up to see which ones get used >>>>>> the most. 
Not sure it tells us anything, but since I'm not familiar with the code I was curious.
>>>>>>
>>>>>> ndn::Block::Block(ndn::Block const&) on average gets invoked
>>>>>> 970 times per Interest for the long names
>>>>>> (1,940,055,896 copy-constructor calls over a 2,000,000-Interest
>>>>>> profiled run).
>>>>>>
>>>>>> I've asked Jeff Burke's group for some actual names they use
>>>>>> so I can see if my names are of reasonable lengths.
>>>>>>
>>>>>> Next I plan to do some tests where I load the CS and then
>>>>>> bombard it with Interests that will always match something stored.
>>>>>>
>>>>>> Also, I should note that I am NOT seeing any signs of memory
>>>>>> growth.
>>>>>>
>>>>>> John
>>>>>>
>>>>>> --------------------------------------------------------------------------------
>>>>>>
>>>>>> These tests are for an applied load of unique interests with
>>>>>> no content returned. This should put a load on the PIT with
>>>>>> no load on the CS.
>>>>>>
>>>>>> Two sets of tests were run:
>>>>>> 1. Optimized nfd
>>>>>> 2. Profiled nfd
>>>>>>
>>>>>> 1. Optimized nfd tests
>>>>>> For these tests nfd was built with the standard default
>>>>>> compilation options, defaultFlags = ['-O2', '-g', '-Wall'].
>>>>>>
>>>>>> The following tests were run with 128 clients, all routing
>>>>>> through one central nfd router. There are no servers to
>>>>>> provide content for the supplied interests; the interests are
>>>>>> routed to hosts that run nfd but serve nothing.
>>>>>>
>>>>>> 16 client hosts running 8 ndn-traffic processes and one nfd.
>>>>>> 16 server hosts running one nfd.
>>>>>> 1 router host running 1 nfd as the central router.
>>>>>>
>>>>>> Three test cases for name length:
>>>>>> short: /example/000
>>>>>> medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000
>>>>>> long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000
>>>>>>
>>>>>> 128 different base names: end of name ranged from 000 to 127.
>>>>>> A sequence number is appended by ndn-traffic to each >>>>>> Interest to force every name to be unique. >>>>>> >>>>>> Applied load was approximately 25500 interests/sec. >>>>>> In the short, medium and long test case, the central router >>>>>> nfd was not able to keep up with >>>>>> the applied load. >>>>>> >>>>>> The gprof data shown below is from the Flat profile given by >>>>>> gprof. I'm only showing the top consumers >>>>>> that consume at least 1% of the cpu time used. >>>>>> >>>>>> short: >>>>>> % cumulative self self total >>>>>> time seconds seconds calls s/call s/call name >>>>>> 19.76 5.68 5.68 741559024 0.00 0.00 >>>>>> boost::detail::shared_count::~shared_count() >>>>>> 13.24 9.49 3.81 150091648 0.00 0.00 >>>>>> ndn::Block::Block(ndn::Block const&) >>>>>> 7.39 11.61 2.13 156984996 0.00 0.00 >>>>>> ndn::Block::~Block() >>>>>> 4.21 12.82 1.21 1914454 0.00 0.00 >>>>>> nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) >>>>>> 4.18 14.02 1.20 8001344 0.00 0.00 >>>>>> nfd::NameTree::insert(ndn::Name const&) >>>>>> 3.93 15.15 1.13 9919137 0.00 0.00 >>>>>> ndn::Name::toUri() const >>>>>> 2.99 16.01 0.86 2000193 0.00 0.00 >>>>>> ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration>>>>> boost::ratio<1l, 1000000000l> > const&, >>>>>> boost::chrono::duration > >>>>>> const&, boost::function const&) >>>>>> 1.81 16.53 0.52 1916756 0.00 0.00 >>>>>> nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>>> 1.79 17.05 0.52 5914452 0.00 0.00 >>>>>> nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, >>>>>> boost::function const&) >>>>>> const >>>>>> 1.67 17.53 0.48 1999999 0.00 0.00 >>>>>> nfd::Cs::find(ndn::Interest const&) const >>>>>> 1.67 18.01 0.48 1 0.48 27.91 >>>>>> boost::asio::detail::task_io_service::run(boost::system::error_code&) >>>>>> 1.46 18.43 0.42 2000129 0.00 0.00 >>>>>> nfd::NameTree::lookup(ndn::Name const&) >>>>>> 1.04 18.73 0.30 13828583 0.00 0.00 >>>>>> boost::detail::function::functor_manager>>>>> boost::_mfi::mf1>>>>> boost::shared_ptr >, >>>>>> boost::_bi::list2, >>>>>> boost::_bi::value > > > >>>>>> >::manage(boost::detail::function::function_buffer const&, >>>>>> boost::detail::function::function_buffer&, >>>>>> boost::detail::function::functor_manager_operation_type) >>>>>> >>>>>> >>>>>> medium: >>>>>> % cumulative self self total >>>>>> time seconds seconds calls s/call s/call name >>>>>> 20.50 8.60 8.60 1266365703 0.00 0.00 >>>>>> boost::detail::shared_count::~shared_count() >>>>>> 18.09 16.19 7.59 348085329 0.00 0.00 >>>>>> ndn::Block::Block(ndn::Block const&) >>>>>> 7.08 19.16 2.97 351410680 0.00 0.00 >>>>>> ndn::Block::~Block() >>>>>> 6.42 21.86 2.70 20001146 0.00 0.00 >>>>>> nfd::NameTree::insert(ndn::Name const&) >>>>>> 5.72 24.26 2.40 21952166 0.00 0.00 >>>>>> ndn::Name::toUri() const >>>>>> 3.13 25.57 1.32 21951129 0.00 0.00 >>>>>> nfd::name_tree::hashName(ndn::Name const&) >>>>>> 2.86 26.77 1.20 1947681 0.00 0.00 >>>>>> nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) >>>>>> 2.29 27.73 0.96 2000129 0.00 0.00 >>>>>> nfd::NameTree::lookup(ndn::Name const&) >>>>>> 2.24 28.67 0.94 2000193 0.00 0.00 >>>>>> ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration>>>>> boost::ratio<1l, 1000000000l> > const&, >>>>>> boost::chrono::duration > >>>>>> const&, boost::function const&) >>>>>> 1.99 29.51 0.84 5947679 0.00 0.00 >>>>>> nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, >>>>>> boost::function const&) >>>>>> const >>>>>> 1.50 30.14 0.63 1949983 0.00 0.00 >>>>>> nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>>> 
1.49 30.76 0.63 78018430 0.00 0.00 >>>>>> std::vector >>>>>> >::_M_insert_aux(__gnu_cxx::__normal_iterator>>>>> std::vector > >, >>>>>> ndn::Block const&) >>>>>> >>>>>> >>>>>> long: >>>>>> % cumulative self self total >>>>>> time seconds seconds calls ms/call ms/call name >>>>>> 25.38 26.82 26.82 1940055896 0.00 0.00 >>>>>> ndn::Block::Block(ndn::Block const&) >>>>>> 22.66 50.75 23.94 4280104948 0.00 0.00 >>>>>> boost::detail::shared_count::~shared_count() >>>>>> 8.42 59.65 8.90 57984812 0.00 0.00 >>>>>> ndn::Name::toUri() const >>>>>> 6.59 66.61 6.97 56000552 0.00 0.00 >>>>>> nfd::NameTree::insert(ndn::Name const&) >>>>>> 6.56 73.54 6.93 57983775 0.00 0.00 >>>>>> nfd::name_tree::hashName(ndn::Name const&) >>>>>> 6.48 80.39 6.85 1941813258 0.00 0.00 >>>>>> ndn::Block::~Block() >>>>>> 3.27 83.84 3.45 2000129 0.00 0.04 >>>>>> nfd::NameTree::lookup(ndn::Name const&) >>>>>> 2.40 86.37 2.54 282015065 0.00 0.00 >>>>>> std::vector >>>>>> >::_M_insert_aux(__gnu_cxx::__normal_iterator>>>>> std::vector > >, >>>>>> ndn::Block const&) >>>>>> 2.06 88.55 2.18 5980919 0.00 0.00 >>>>>> nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, >>>>>> boost::function const&) >>>>>> const >>>>>> 1.45 90.08 1.53 564030243 0.00 0.00 __tcf_1 >>>>>> 1.00 91.14 1.06 1983223 0.00 0.00 >>>>>> nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>>> >>>>>> Block::Block Analysis: >>>>>> Here are all the gprof listings for Block::Block from the >>>>>> flat profile for the long name case above. >>>>>> The numbers in parens (#) are my addition to try to match >>>>>> this with the actual code. >>>>>> >>>>>> 25.38 26.82 26.82 1940055896 0.00 0.00 (1) >>>>>> ndn::Block::Block(ndn::Block const&) >>>>>> 0.37 96.51 0.40 64016585 0.00 0.00 (2) >>>>>> ndn::Block::Block(boost::shared_ptr >>>>>> const&, unsigned int, __gnu_cxx::__normal_iterator>>>>> char const*, std::vector>>>>> std::allocator > > const&, >>>>>> __gnu_cxx::__normal_iterator>>>>> std::vector > > >>>>>> const&, __gnu_cxx::__normal_iterator>>>>> std::vector > > >>>>>> const&, __gnu_cxx::__normal_iterator>>>>> std::vector > > >>>>>> const&) >>>>>> 0.05 102.92 0.06 62072947 0.00 0.00 (3) >>>>>> ndn::Block::Block(unsigned int) >>>>>> 0.02 104.68 0.02 16006475 0.00 0.00 (4) >>>>>> ndn::Block::Block() >>>>>> 0.02 105.14 0.02 (5) >>>>>> ndn::Block::Block(std::istream&) >>>>>> 0.00 105.64 0.00 2457 0.00 0.00 (6) >>>>>> ndn::Block::Block(boost::shared_ptr >>>>>> const&, __gnu_cxx::__normal_iterator>>>>> std::vector > > >>>>>> const&, __gnu_cxx::__normal_iterator>>>>> std::vector > > >>>>>> const&, bool) >>>>>> 0.00 105.64 0.00 1744 0.00 0.00 (7) >>>>>> ndn::Block::Block(unsigned int, boost::shared_ptr>>>>> const> const&) >>>>>> 0.00 105.64 0.00 1196 0.00 0.00 (8) >>>>>> ndn::Block::Block(boost::shared_ptr const&) >>>>>> 0.00 105.64 0.00 517 0.00 0.00 (9) >>>>>> ndn::Block::Block(unsigned int, ndn::Block const&) >>>>>> 0.00 105.64 0.00 1 0.00 20.00 (10) >>>>>> ndn::Block::Block(unsigned char const*, unsigned long) >>>>>> >>>>>> And here are all the constructors' signatures from block.cpp: >>>>>> ndn-cpp-dev/src/encoding$ grep "Block::Block" block.cpp >>>>>> (4) Block::Block() >>>>>> (1) Block::Block(const EncodingBuffer& buffer) >>>>>> (2) Block::Block(const ConstBufferPtr& wire, >>>>>> uint32_t type, >>>>>> const Buffer::const_iterator& begin, const >>>>>> Buffer::const_iterator& end, >>>>>> const Buffer::const_iterator& valueBegin, const >>>>>> Buffer::const_iterator& valueEnd) >>>>>> (8) Block::Block(const ConstBufferPtr& buffer) >>>>>> (6) Block::Block(const 
ConstBufferPtr& buffer, >>>>>> const Buffer::const_iterator& begin, const >>>>>> Buffer::const_iterator& end, >>>>>> bool verifyLength/* = true*/) >>>>>> (5) Block::Block(std::istream& is) >>>>>> (10) Block::Block(const uint8_t* buffer, size_t maxlength) >>>>>> Block::Block(const void* bufferX, size_t maxlength) >>>>>> (3) Block::Block(uint32_t type) >>>>>> (7) Block::Block(uint32_t type, const ConstBufferPtr& value) >>>>>> (9) Block::Block(uint32_t type, const Block& value) >>>>>> >>>>>> >>>>>> >> > > _______________________________________________ Nfd-dev mailing > list Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at cs.ARIZONA.EDU Thu Aug 7 08:01:16 2014 From: bzhang at cs.ARIZONA.EDU (Beichuan Zhang) Date: Thu, 7 Aug 2014 08:01:16 -0700 Subject: [Nfd-dev] Fwd: NFD Performance testing In-Reply-To: References: Message-ID: <13FD8D59-DE09-463C-A846-6C78DF2E1AE3@cs.arizona.edu> A task, http://redmine.named-data.net/issues/1819, was created yesterday to get these numbers. Chengyu from CSU will do the measurement before John comes back. Beichuan On Aug 7, 2014, at 6:50 AM, Burke, Jeff wrote: > Thanks. Were there any min/avg/max latency and/or packet drop numbers? High latency (>200ms) or drops is the most likely cause of what we are seeing.) > Jeff > > > From: "bzhang at cs.arizona.edu" > Date: Tue, 5 Aug 2014 13:54:55 -0700 > To: "nfd-dev at lists.cs.ucla.edu" > Subject: [Nfd-dev] Fwd: NFD Performance testing > >> This was John DeHart?s performance profiling results back in April. It was NFD 0.1 tested on ONL. >> >> Beichuan >> >> Begin forwarded message: >> >>> From: John DeHart >>> Subject: Re: NFD Performance testing >>> Date: April 11, 2014 at 8:28:36 AM MST >>> To: Alex Afanasyev >>> Cc: Beichuan Zhang , Junxiao Shi , Patrick Crowley , Haowei Yuan , "Ben Abraham, Hila" >>> >>> >>> Alex, >>> >>> I just re-ran my tests for forwarding interests and content for unique names >>> of short, medium and long lengths. >>> Looks like performance improved between 8% and 16% depending on length. >>> Before: >>> short: 13700 Interests/sec >>> medium: 9500 Interests/sec >>> long: 4300 Interests/sec >>> >>> Now: >>> short: 14900 Interests/sec >>> medium: 10500 Interests/sec >>> long: 5000 Interests/sec >>> >>> John >>> >>> On 4/10/14 8:08 PM, Alex Afanasyev wrote: >>>> John, we just merged a commit that optimizes hash computation. When you have time, can you try to rerun some of the evaluations and check if we have better numbers (I hope). >>>> >>>> Commit f52dac7f03ac9ba996769cf620badeeb147b6d43 (current master) includes CS fix as well, so you can just get latest code from master branch of github or gerrit. >>>> >>>> Thanks! >>>> >>>> --- >>>> Alex >>>> >>>> On Apr 10, 2014, at 12:37 PM, John DeHart wrote: >>>> >>>>> Beichuan, >>>>> >>>>> Yes, I have the full gprof output available. I just didn't want to email that >>>>> to everyone. There are some processing tools I have not tried out yet >>>>> that might make things easier to read but for now, here is a sample: >>>>> >>>>> http://www.arl.wustl.edu/~jdd/NDN_GPROF_RESULTS/ >>>>> >>>>> There you should see this file: gprof.out.no_content_long_names.txt >>>>> >>>>> If you search in there for "Call graph" you will get to the hierarchical view. 
>>>>> >>>>> If that is useful, I can put the rest of my files there also. >>>>> >>>>> John >>>>> >>>>> >>>>> >>>>> >>>>> On 4/10/14 2:26 PM, Beichuan Zhang wrote: >>>>>> Hi John, >>>>>> >>>>>> Thanks a lot! These are very informative and useful. >>>>>> >>>>>> To correlate these numbers with the code better, is it possible to get a hierarchical view of the functions and time? That'll make our analysis much easier. (Even the current flat form is already very useful.) >>>>>> >>>>>> Looking forward to the CS test results too. >>>>>> >>>>>> Beichuan >>>>>> >>>>>> >>>>>> On Apr 10, 2014, at 12:08 PM, John DeHart wrote: >>>>>> >>>>>>> All: >>>>>>> >>>>>>> Here is another set of data. This set takes content out of the picture. >>>>>>> I don't run any content servers in these tests so the Interests are never fulfilled. >>>>>>> I again vary the length of the names used to see that impact. >>>>>>> >>>>>>> Approximate rate nfd was able to handle: >>>>>>> short: 37500 Interests/sec >>>>>>> medium: 22400 Interests/sec >>>>>>> long: 7100 Interests/sec >>>>>>> >>>>>>> For the short name case I find it interesting that nfd::Cs::find() still consumes 1.67% of >>>>>>> our processing time. Remember there is no content ever returned so the CS should >>>>>>> always be empty. Not that we want to optimize anything for the case of an empty >>>>>>> CS, but that seems kind of high for a case where it doesn't have to do any searching. >>>>>>> >>>>>>> Again lots of usage for Block::Block(). For my amusement, for the long name case, >>>>>>> I gathered all the usage of all the different Block::Block() signatures at the end of this >>>>>>> note and tried to match them up to see which ones get used the most. Not sure it tells >>>>>>> us anything but since I'm not familiar with the code I was curious. >>>>>>> >>>>>>> ndn::Block::Block(ndn::Block const&) on average gets invoked 970 per Interest >>>>>>> for the long names. >>>>>>> >>>>>>> I've asked Jeff Burke's group for some actual names they use so I can see >>>>>>> if my names are of reasonable lengths. >>>>>>> >>>>>>> Next I plan to do some tests where I load the CS and then bombard it with >>>>>>> Interests that will always match something stored. >>>>>>> >>>>>>> Also, I should note that I am NOT seeing any signs of memory growth. >>>>>>> >>>>>>> John >>>>>>> >>>>>>> >>>>>>> -------------------------------------------------------------------------------- >>>>>>> >>>>>>> These tests are for an applied load of unique interests with no content returned. >>>>>>> This should put a load on the PIT with no load on the CS. >>>>>>> >>>>>>> Two sets of tests were run. >>>>>>> 1. Optimized nfd >>>>>>> 2. Profiled nfd >>>>>>> >>>>>>> 1. Optimized nfd tests >>>>>>> For these tests nfd was built with the standard default compilation options, defaultFlags = ['-O2', '-g', '-Wall'] >>>>>>> >>>>>>> The following tests were run with 128 client all routing through one central nfd router. There are no >>>>>>> servers to provide content for the supplied interests. There are hosts running nfd that the interests are routed to. >>>>>>> >>>>>>> 16 client hosts running 8 ndn-traffic processes and one nfd. >>>>>>> 16 server hosts running one nfd. >>>>>>> 1 router host running 1 nfd as the central router. 
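The test names detailed next are built the way John describes: one of 128 base names whose last component runs from 000 to 127, with a per-Interest sequence number appended by ndn-traffic to force uniqueness. A hypothetical sketch of that naming scheme (ndn-traffic's actual implementation may differ):

    // Hypothetical reconstruction of the unique-name scheme used in these tests.
    #include <cstdio>
    #include <string>

    std::string
    makeTestName(const std::string& prefix, unsigned baseIndex, unsigned long seqNo)
    {
      char suffix[40];
      // the last base component is 000..127; the sequence number makes each Interest unique
      std::snprintf(suffix, sizeof(suffix), "/%03u/%lu", baseIndex % 128, seqNo);
      return prefix + suffix;
    }
    // makeTestName("/example", 5, 42) yields "/example/005/42"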
>>>>>>> >>>>>>> Three test cases for name length: >>>>>>> short: /example/000 >>>>>>> medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>>>> long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>>>> >>>>>>> 128 different base names: end of name ranged from 000 to 127. >>>>>>> A sequence number is appended by ndn-traffic to each Interest to force every name to be unique. >>>>>>> >>>>>>> Applied load was approximately 42000 interests/sec. >>>>>>> >>>>>>> Approximate rate nfd was able to handle: >>>>>>> short: 37500 Interests/sec >>>>>>> medium: 22400 Interests/sec >>>>>>> long: 7100 Interests/sec >>>>>>> >>>>>>> 2. Profiled nfd test >>>>>>> In order to generate gprof output, nfd is built with profile enabled, defaultFlags = ['-O2', '-pg', '-g', '-Wall']. >>>>>>> This obviously slows nfd down and the performance is not nearly what the optimized case shows. But what we >>>>>>> are interested in here is what gprof can tell us about which functions are consuming time. >>>>>>> >>>>>>> The following tests were run with 128 client/server pairs all routing through one central nfd router. >>>>>>> >>>>>>> Tests were run for 2,000,000 Interests received by nfd. Counter in pit code added to trigger call >>>>>>> to exit() so gmon.out could be generated. >>>>>>> >>>>>>> 16 client hosts running 8 ndn-traffic process and 1 nfd. >>>>>>> 16 server hosts running 8 ndn-traffic-server process and 1 nfd. >>>>>>> 1 router host running 1 nfd as the central router. >>>>>>> >>>>>>> Three test cases for name length: >>>>>>> short: /example/000 >>>>>>> medium: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>>>> long: /example/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/ABCDE/FGHIJ/KLMNO/PQRST/UVWXY/Z/000 >>>>>>> >>>>>>> 128 different base names: end of name ranged from 000 to 127. >>>>>>> A sequence number is appended by ndn-traffic to each Interest to force every name to be unique. >>>>>>> >>>>>>> Applied load was approximately 25500 interests/sec. >>>>>>> In the short, medium and long test case, the central router nfd was not able to keep up with >>>>>>> the applied load. >>>>>>> >>>>>>> The gprof data shown below is from the Flat profile given by gprof. I'm only showing the top consumers >>>>>>> that consume at least 1% of the cpu time used. 
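Before the tables, a note on the "counter in pit code" trick mentioned above: gprof only writes gmon.out when the process exits normally, so a long-running daemon like nfd has to be stopped from the inside once enough traffic has been observed. A hypothetical sketch of that instrumentation (not the actual modification made to NFD's PIT code):

    // Count processed Interests and exit cleanly so gprof can dump gmon.out.
    #include <cstdlib>

    static unsigned long g_nInterests = 0;
    static const unsigned long N_INTERESTS_LIMIT = 2000000; // matches the 2,000,000-Interest runs

    inline void
    countInterestForProfiling()
    {
      if (++g_nInterests >= N_INTERESTS_LIMIT) {
        std::exit(0); // a normal exit flushes the profiling data to gmon.out
      }
    }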
>>>>>>> >>>>>>> short: >>>>>>> % cumulative self self total >>>>>>> time seconds seconds calls s/call s/call name >>>>>>> 19.76 5.68 5.68 741559024 0.00 0.00 boost::detail::shared_count::~shared_count() >>>>>>> 13.24 9.49 3.81 150091648 0.00 0.00 ndn::Block::Block(ndn::Block const&) >>>>>>> 7.39 11.61 2.13 156984996 0.00 0.00 ndn::Block::~Block() >>>>>>> 4.21 12.82 1.21 1914454 0.00 0.00 nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) >>>>>>> 4.18 14.02 1.20 8001344 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) >>>>>>> 3.93 15.15 1.13 9919137 0.00 0.00 ndn::Name::toUri() const >>>>>>> 2.99 16.01 0.86 2000193 0.00 0.00 ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration > const&, boost::chrono::duration > const&, boost::function const&) >>>>>>> 1.81 16.53 0.52 1916756 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>>>> 1.79 17.05 0.52 5914452 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const >>>>>>> 1.67 17.53 0.48 1999999 0.00 0.00 nfd::Cs::find(ndn::Interest const&) const >>>>>>> 1.67 18.01 0.48 1 0.48 27.91 boost::asio::detail::task_io_service::run(boost::system::error_code&) >>>>>>> 1.46 18.43 0.42 2000129 0.00 0.00 nfd::NameTree::lookup(ndn::Name const&) >>>>>>> 1.04 18.73 0.30 13828583 0.00 0.00 boost::detail::function::functor_manager >, boost::_bi::list2, boost::_bi::value > > > >::manage(boost::detail::function::function_buffer const&, boost::detail::function::function_buffer&, boost::detail::function::functor_manager_operation_type) >>>>>>> >>>>>>> >>>>>>> medium: >>>>>>> % cumulative self self total >>>>>>> time seconds seconds calls s/call s/call name >>>>>>> 20.50 8.60 8.60 1266365703 0.00 0.00 boost::detail::shared_count::~shared_count() >>>>>>> 18.09 16.19 7.59 348085329 0.00 0.00 ndn::Block::Block(ndn::Block const&) >>>>>>> 7.08 19.16 2.97 351410680 0.00 0.00 ndn::Block::~Block() >>>>>>> 6.42 21.86 2.70 20001146 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) >>>>>>> 5.72 24.26 2.40 21952166 0.00 0.00 ndn::Name::toUri() const >>>>>>> 3.13 25.57 1.32 21951129 0.00 0.00 nfd::name_tree::hashName(ndn::Name const&) >>>>>>> 2.86 26.77 1.20 1947681 0.00 0.00 nfd::NameTree::eraseEntryIfEmpty(boost::shared_ptr) >>>>>>> 2.29 27.73 0.96 2000129 0.00 0.00 nfd::NameTree::lookup(ndn::Name const&) >>>>>>> 2.24 28.67 0.94 2000193 0.00 0.00 ndn::Scheduler::schedulePeriodicEvent(boost::chrono::duration > const&, boost::chrono::duration > const&, boost::function const&) >>>>>>> 1.99 29.51 0.84 5947679 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const >>>>>>> 1.50 30.14 0.63 1949983 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>>>> 1.49 30.76 0.63 78018430 0.00 0.00 std::vector >::_M_insert_aux(__gnu_cxx::__normal_iterator > >, ndn::Block const&) >>>>>>> >>>>>>> >>>>>>> long: >>>>>>> % cumulative self self total >>>>>>> time seconds seconds calls ms/call ms/call name >>>>>>> 25.38 26.82 26.82 1940055896 0.00 0.00 ndn::Block::Block(ndn::Block const&) >>>>>>> 22.66 50.75 23.94 4280104948 0.00 0.00 boost::detail::shared_count::~shared_count() >>>>>>> 8.42 59.65 8.90 57984812 0.00 0.00 ndn::Name::toUri() const >>>>>>> 6.59 66.61 6.97 56000552 0.00 0.00 nfd::NameTree::insert(ndn::Name const&) >>>>>>> 6.56 73.54 6.93 57983775 0.00 0.00 nfd::name_tree::hashName(ndn::Name const&) >>>>>>> 6.48 80.39 6.85 1941813258 0.00 0.00 ndn::Block::~Block() >>>>>>> 3.27 83.84 3.45 2000129 0.00 0.04 nfd::NameTree::lookup(ndn::Name const&) >>>>>>> 2.40 86.37 2.54 
282015065 0.00 0.00 std::vector >::_M_insert_aux(__gnu_cxx::__normal_iterator > >, ndn::Block const&) >>>>>>> 2.06 88.55 2.18 5980919 0.00 0.00 nfd::NameTree::findLongestPrefixMatch(boost::shared_ptr, boost::function const&) const >>>>>>> 1.45 90.08 1.53 564030243 0.00 0.00 __tcf_1 >>>>>>> 1.00 91.14 1.06 1983223 0.00 0.00 nfd::NameTree::findExactMatch(ndn::Name const&) const >>>>>>> >>>>>>> Block::Block Analysis: >>>>>>> Here are all the gprof listings for Block::Block from the flat profile for the long name case above. >>>>>>> The numbers in parens (#) are my addition to try to match this with the actual code. >>>>>>> >>>>>>> 25.38 26.82 26.82 1940055896 0.00 0.00 (1) ndn::Block::Block(ndn::Block const&) >>>>>>> 0.37 96.51 0.40 64016585 0.00 0.00 (2) ndn::Block::Block(boost::shared_ptr const&, unsigned int, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&) >>>>>>> 0.05 102.92 0.06 62072947 0.00 0.00 (3) ndn::Block::Block(unsigned int) >>>>>>> 0.02 104.68 0.02 16006475 0.00 0.00 (4) ndn::Block::Block() >>>>>>> 0.02 105.14 0.02 (5) ndn::Block::Block(std::istream&) >>>>>>> 0.00 105.64 0.00 2457 0.00 0.00 (6) ndn::Block::Block(boost::shared_ptr const&, __gnu_cxx::__normal_iterator > > const&, __gnu_cxx::__normal_iterator > > const&, bool) >>>>>>> 0.00 105.64 0.00 1744 0.00 0.00 (7) ndn::Block::Block(unsigned int, boost::shared_ptr const&) >>>>>>> 0.00 105.64 0.00 1196 0.00 0.00 (8) ndn::Block::Block(boost::shared_ptr const&) >>>>>>> 0.00 105.64 0.00 517 0.00 0.00 (9) ndn::Block::Block(unsigned int, ndn::Block const&) >>>>>>> 0.00 105.64 0.00 1 0.00 20.00 (10) ndn::Block::Block(unsigned char const*, unsigned long) >>>>>>> >>>>>>> And here are all the constructors' signatures from block.cpp: >>>>>>> ndn-cpp-dev/src/encoding$ grep "Block::Block" block.cpp >>>>>>> (4) Block::Block() >>>>>>> (1) Block::Block(const EncodingBuffer& buffer) >>>>>>> (2) Block::Block(const ConstBufferPtr& wire, >>>>>>> uint32_t type, >>>>>>> const Buffer::const_iterator& begin, const Buffer::const_iterator& end, >>>>>>> const Buffer::const_iterator& valueBegin, const Buffer::const_iterator& valueEnd) >>>>>>> (8) Block::Block(const ConstBufferPtr& buffer) >>>>>>> (6) Block::Block(const ConstBufferPtr& buffer, >>>>>>> const Buffer::const_iterator& begin, const Buffer::const_iterator& end, >>>>>>> bool verifyLength/* = true*/) >>>>>>> (5) Block::Block(std::istream& is) >>>>>>> (10) Block::Block(const uint8_t* buffer, size_t maxlength) >>>>>>> Block::Block(const void* bufferX, size_t maxlength) >>>>>>> (3) Block::Block(uint32_t type) >>>>>>> (7) Block::Block(uint32_t type, const ConstBufferPtr& value) >>>>>>> (9) Block::Block(uint32_t type, const Block& value) >>>>>>> >>>>>>> >>>>>>> >>> >> >> _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From shijunxiao at email.arizona.edu Sun Aug 10 21:35:20 2014 From: shijunxiao at email.arizona.edu (Junxiao Shi) Date: Sun, 10 Aug 2014 21:35:20 -0700 Subject: [Nfd-dev] How to treat ".." in an NDN URI? In-Reply-To: References: Message-ID: Dear folks I looked at RFC3986 again. Section 6.2.2.3. Path Segment Normalization states: The complete path segments "." and ".." 
are intended only for use within relative references (Section 4.1) and are removed as part of the reference resolution process (Section 5.2). However, some deployed implementations incorrectly assume that reference resolution is not necessary when the reference is already a URI and thus fail to remove dot-segments when they occur in non-relative paths.

This implies that "." and ".." are both permitted in absolute URIs.
The exact rules are defined in Section 5.2.

Therefore, these ndn: URIs are all equivalent:
- /A/B/C
- ndn:/A/B/C
- ndn:///A/B/C
- ndn://authority/A/B/C
- /A/./B/C
- /A/D/../B/C
- /../A/B/C
- /./A/B/C

Yours, Junxiao

On Wed, Aug 6, 2014 at 9:49 AM, Thompson, Jeff wrote:
> Hi Junxiao,
>
> Using your example of the URL for RFC3986, the following link works:
> http://tools.ietf.org/html/rfcblahblahblah/../rfc3986#section-3.3
>
> So ".." is illegal in a URI, but legal in a URL? Maybe the support for
> .. in a URL is non-standard?
>
> - Jeff T
>
>
> From: Junxiao Shi
> Date: Tuesday, August 5, 2014 9:27 AM
> To: Jeff Thompson
> Cc: nfd-dev
> Subject: Re: [Nfd-dev] How to treat ".." in an NDN URI?
>
> Hi JeffT
>
> TLV spec cites RFC3986 for URI syntax. The processing of ".." doesn't
> need to be mentioned in the TLV spec because it's inherited from RFC3986.
>
> RFC3986 says:
>
> The path segments "." and "..", also known as dot-segments, are defined
> for relative reference within the path name hierarchy. They are intended
> for use at the beginning of a relative-path reference to indicate relative
> position within the hierarchical tree of names.
>
> Therefore, if ".." appears within an absolute ndn URI, the entire URI is
> invalid and should raise an error.
>
> Yours, Junxiao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bzhang at cs.arizona.edu Sun Aug 10 21:57:14 2014
From: bzhang at cs.arizona.edu (Beichuan Zhang)
Date: Sun, 10 Aug 2014 21:57:14 -0700
Subject: [Nfd-dev] How to treat ".." in an NDN URI?
In-Reply-To: 
References: 
Message-ID: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu>

This is complicated. In what scenarios do we need to use "up to a level" as an NDN name component?

Can we treat "." and ".." as is, without the special meaning of "this level" and "up to a level"? I'm also fine with treating them as illegal name components.

Beichuan

On Aug 10, 2014, at 9:35 PM, Junxiao Shi wrote:
> Dear folks
>
> I looked at RFC3986 again.
>
> Section 6.2.2.3. Path Segment Normalization states:
> The complete path segments "." and ".." are intended only for use within relative references (Section 4.1) and are removed as part of the reference resolution process (Section 5.2). However, some deployed implementations incorrectly assume that reference resolution is not necessary when the reference is already a URI and thus fail to remove dot-segments when they occur in non-relative paths.
> This implies that "." and ".." are both permitted in absolute URIs.
> The exact rules are defined in Section 5.2.
>
> Therefore, these ndn: URIs are all equivalent:
> /A/B/C
> ndn:/A/B/C
> ndn:///A/B/C
> ndn://authority/A/B/C
> /A/./B/C
> /A/D/../B/C
> /../A/B/C
> /./A/B/C
>
> Yours, Junxiao
>
> On Wed, Aug 6, 2014 at 9:49 AM, Thompson, Jeff wrote:
> Hi Junxiao,
>
> Using your example of the URL for RFC3986, the following link works:
> http://tools.ietf.org/html/rfcblahblahblah/../rfc3986#section-3.3
>
> So ".." is illegal in a URI, but legal in a URL? Maybe the support for
> .. in a URL is non-standard?
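Junxiao's equivalence list follows mechanically from the "remove_dot_segments" algorithm in RFC3986 Section 5.2.4. A compact sketch of that algorithm applied to the path portion of an ndn: URI, illustrative only and not code taken from ndn-cxx:

    // Sketch of RFC3986 5.2.4 remove_dot_segments; rules A-E are marked inline.
    #include <string>

    std::string
    removeDotSegments(std::string in)
    {
      std::string out;
      while (!in.empty()) {
        if (in.compare(0, 3, "../") == 0)      in.erase(0, 3);        // A
        else if (in.compare(0, 2, "./") == 0)  in.erase(0, 2);        // A
        else if (in.compare(0, 3, "/./") == 0) in.replace(0, 3, "/"); // B
        else if (in == "/.")                   in = "/";              // B
        else if (in.compare(0, 4, "/../") == 0 || in == "/..") {      // C
          in.replace(0, in == "/.." ? 3 : 4, "/");
          std::string::size_type slash = out.rfind('/');
          out.erase(slash == std::string::npos ? 0 : slash);          // drop last output segment
        }
        else if (in == "." || in == "..")      in.clear();            // D
        else {                                                        // E
          std::string::size_type next = in.find('/', 1);
          out += in.substr(0, next);
          in.erase(0, next == std::string::npos ? in.size() : next);
        }
      }
      return out;
    }
    // removeDotSegments("/A/./B/C")    == "/A/B/C"
    // removeDotSegments("/A/D/../B/C") == "/A/B/C"
    // removeDotSegments("/../A/B/C")   == "/A/B/C"

Run against the list above, every variant's path normalizes to /A/B/C, which is exactly the equivalence being claimed.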
> > - Jeff T
>
>
> From: Junxiao Shi
> Date: Tuesday, August 5, 2014 9:27 AM
> To: Jeff Thompson
> Cc: nfd-dev
> Subject: Re: [Nfd-dev] How to treat ".." in an NDN URI?
>
> Hi JeffT
>
> TLV spec cites RFC3986 for URI syntax. The processing of ".." doesn't
> need to be mentioned in the TLV spec because it's inherited from RFC3986.
>
> RFC3986 says:
> The path segments "." and "..", also known as dot-segments, are defined
> for relative reference within the path name hierarchy. They are intended
> for use at the beginning of a relative-path reference to indicate relative
> position within the hierarchical tree of names.
>
> Therefore, if ".." appears within an absolute ndn URI, the entire URI is
> invalid and should raise an error.
>
> Yours, Junxiao
> _______________________________________________
> Nfd-dev mailing list
> Nfd-dev at lists.cs.ucla.edu
> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu Sun Aug 10 21:59:58 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sun, 10 Aug 2014 21:59:58 -0700
Subject: [Nfd-dev] How to treat ".." in an NDN URI?
In-Reply-To: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu>
References: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu>
Message-ID: 

The ndn protocol extension for Firefox needs relative URIs. The base URI used in resolution is the Data Name without the segment component.

On Aug 10, 2014 9:56 PM, "Beichuan Zhang" wrote:
>
> This is complicated. In what scenarios do we need to use "up to a level" as an NDN name component?
>
> Can we treat "." and ".." as is, without the special meaning of "this level" and "up to a level"? I'm also fine with treating them as illegal name components.
>
> Beichuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bzhang at cs.ARIZONA.EDU Sun Aug 10 22:10:23 2014
From: bzhang at cs.ARIZONA.EDU (Beichuan Zhang)
Date: Sun, 10 Aug 2014 22:10:23 -0700
Subject: [Nfd-dev] How to treat ".." in an NDN URI?
In-Reply-To: 
References: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu>
Message-ID: <14F8B3D1-F0D1-488A-B889-DD2610DD05DE@cs.arizona.edu>

Can you give an example?

On Aug 10, 2014, at 9:59 PM, Junxiao Shi wrote:
> The ndn protocol extension for Firefox needs relative URIs. The base URI used in resolution is the Data Name without the segment component.
>
> On Aug 10, 2014 9:56 PM, "Beichuan Zhang" wrote:
> >
> > This is complicated. In what scenarios do we need to use "up to a level" as an NDN name component?
> >
> > Can we treat "." and ".." as is, without the special meaning of "this level" and "up to a level"? I'm also fine with treating them as illegal name components.
> >
> > Beichuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jburke at remap.ucla.edu Mon Aug 11 08:20:40 2014
From: jburke at remap.ucla.edu (Burke, Jeff)
Date: Mon, 11 Aug 2014 15:20:40 +0000
Subject: [Nfd-dev] Repo-ng documentation?
Message-ID: 

Hi,

The repo-ng documentation does not have much introductory documentation (cf. ccnr), at least as far as I can tell. Where should a *new* developer to NDN be pointed to see examples and read about how to start using it?

Thanks,
Jeff

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alexander.afanasyev at ucla.edu Mon Aug 11 11:46:58 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Mon, 11 Aug 2014 11:46:58 -0700
Subject: [Nfd-dev] gerrit update
Message-ID: <7F18B21B-7148-41E1-83DA-13B1FE4BD1FF@ucla.edu>

Hi guys,

I have updated gerrit to the new version (2.9). As part of this update, the default change screen has been changed to the "new UI". If you're experiencing problems with it, you can change back to the "old UI" in settings: http://gerrit.named-data.net/#/settings/preferences (Change View box).

---
Alex

From shijunxiao at email.arizona.edu Wed Aug 13 09:14:16 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 13 Aug 2014 09:14:16 -0700
Subject: [Nfd-dev] Repo-ng documentation?
In-Reply-To: 
References: 
Message-ID: 

Hi Jeff

The repo protocol is listed at http://redmine.named-data.net/projects/repo-ng/wiki

I'm not aware of any library support for the repo protocol. This means a user has to manually construct repo management commands in order to insert contents into the repo.

Yours, Junxiao

On Mon, Aug 11, 2014 at 8:20 AM, Burke, Jeff wrote:
>
> Hi,
>
> The repo-ng documentation does not have much introductory documentation
> (cf. ccnr), at least as far as I can tell. Where should a *new* developer
> to NDN be pointed to see examples and read about how to start using it?
>
> Thanks,
> Jeff
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu Wed Aug 13 09:20:56 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 13 Aug 2014 09:20:56 -0700
Subject: [Nfd-dev] How to treat ".." in an NDN URI?
In-Reply-To: <14F8B3D1-F0D1-488A-B889-DD2610DD05DE@cs.arizona.edu>
References: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu> <14F8B3D1-F0D1-488A-B889-DD2610DD05DE@cs.arizona.edu>
Message-ID: 

Hi Beichuan

1. visit ndn:/example/dir1/page1.htm
2. page1 contains a link with href="../dir2/page2.htm"
3. clicking this link navigates to ndn:/example/dir2/page2.htm

This procedure does not inevitably require ".." to be permitted in an absolute URI. However, RFC3986 defines the resolution of both relative and absolute URIs to use the same procedure; therefore ".." is permitted in absolute URIs as well.

Linking to relative URIs is useful in web development, because it allows an author to create a web page or web application without first knowing the absolute URI under which it would be hosted. The alternative is to write absolute URIs everywhere, which is not desirable to web developers.

Yours, Junxiao

On Sun, Aug 10, 2014 at 10:10 PM, Beichuan Zhang wrote:
> Can you give an example?
>
> On Aug 10, 2014, at 9:59 PM, Junxiao Shi wrote:
>
> The ndn protocol extension for Firefox needs relative URIs. The base URI used
> in resolution is the Data Name without the segment component.
>
> On Aug 10, 2014 9:56 PM, "Beichuan Zhang" wrote:
> >
> > This is complicated. In what scenarios do we need to use "up to a level"
> > as an NDN name component?
> >
> > Can we treat "." and ".." as is, without the special meaning of "this
> > level" and "up to a level"? I'm also fine with treating them as illegal
> > name components.
> >
> > Beichuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu Wed Aug 13 10:09:26 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 13 Aug 2014 10:09:26 -0700
Subject: [Nfd-dev] Gerrit: gitweb "not found"
Message-ID: 

Dear folks

I notice that the "gitweb" links on several projects (such as ndn-cxx) are not working.
Please resolve.

Yours, Junxiao

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bzhang at cs.ARIZONA.EDU Wed Aug 13 11:07:39 2014
From: bzhang at cs.ARIZONA.EDU (Beichuan Zhang)
Date: Wed, 13 Aug 2014 11:07:39 -0700
Subject: [Nfd-dev] How to treat ".." in an NDN URI?
In-Reply-To: 
References: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu> <14F8B3D1-F0D1-488A-B889-DD2610DD05DE@cs.arizona.edu>
Message-ID: <0B189E0E-9536-4E89-9E4D-C4AF2CFA1EAE@cs.arizona.edu>

It makes sense from the app/browser point of view. My concern was about the forwarder. I don't think NFD should support all the variations of these relative paths.

Beichuan

On Aug 13, 2014, at 9:20 AM, Junxiao Shi wrote:
> Hi Beichuan
>
> 1. visit ndn:/example/dir1/page1.htm
> 2. page1 contains a link with href="../dir2/page2.htm"
> 3. clicking this link navigates to ndn:/example/dir2/page2.htm
>
> This procedure does not inevitably require ".." to be permitted in an absolute URI. However, RFC3986 defines the resolution of both relative and absolute URIs to use the same procedure; therefore ".." is permitted in absolute URIs as well.
>
> Linking to relative URIs is useful in web development, because it allows an author to create a web page or web application without first knowing the absolute URI under which it would be hosted.
> The alternative is to write absolute URIs everywhere, which is not desirable to web developers.
>
> Yours, Junxiao
>
> On Sun, Aug 10, 2014 at 10:10 PM, Beichuan Zhang wrote:
> > Can you give an example?
> >
> > On Aug 10, 2014, at 9:59 PM, Junxiao Shi wrote:
> >
> > The ndn protocol extension for Firefox needs relative URIs. The base URI used
> > in resolution is the Data Name without the segment component.
> >
> > On Aug 10, 2014 9:56 PM, "Beichuan Zhang" wrote:
> > >
> > > This is complicated. In what scenarios do we need to use "up to a level"
> > > as an NDN name component?
> > >
> > > Can we treat "." and ".." as is, without the special meaning of "this
> > > level" and "up to a level"? I'm also fine with treating them as illegal
> > > name components.
> > >
> > > Beichuan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alexander.afanasyev at ucla.edu Wed Aug 13 11:16:22 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Wed, 13 Aug 2014 11:16:22 -0700
Subject: [Nfd-dev] How to treat ".." in an NDN URI?
In-Reply-To: <0B189E0E-9536-4E89-9E4D-C4AF2CFA1EAE@cs.arizona.edu>
References: <172112D9-580F-4B24-BCFC-02EE4EE34DAE@cs.arizona.edu> <14F8B3D1-F0D1-488A-B889-DD2610DD05DE@cs.arizona.edu> <0B189E0E-9536-4E89-9E4D-C4AF2CFA1EAE@cs.arizona.edu>
Message-ID: 

As far as I can understand, what Junxiao is saying applies only to the URI representation and is never exposed in the wire TLV representation, which is what the forwarder works on.

My opinion is that this feature should be implemented only when somebody actually needs it. Until then, there is no point in spending time on it.

---
Alex

On Aug 13, 2014, at 11:07 AM, Beichuan Zhang wrote:
> It makes sense from the app/browser point of view. My concern was about the forwarder. I don't think NFD should support all the variations of these relative paths.
>
> Beichuan
>
> On Aug 13, 2014, at 9:20 AM, Junxiao Shi wrote:
>
>> Hi Beichuan
>>
>> 1. visit ndn:/example/dir1/page1.htm
>> 2. page1 contains a link with href="../dir2/page2.htm"
>> 3. clicking this link navigates to ndn:/example/dir2/page2.htm
>>
>> This procedure does not inevitably require ".." to be permitted in an absolute URI. However, RFC3986 defines the resolution of both relative and absolute URIs to use the same procedure; therefore ".." is permitted in absolute URIs as well.
>>
>> Linking to relative URIs is useful in web development, because it allows an author to create a web page or web application without first knowing the absolute URI under which it would be hosted.
>> The alternative is to write absolute URIs everywhere, which is not desirable to web developers.
>>
>> Yours, Junxiao
>>
>> On Sun, Aug 10, 2014 at 10:10 PM, Beichuan Zhang wrote:
>> Can you give an example?
>>
>> On Aug 10, 2014, at 9:59 PM, Junxiao Shi wrote:
>>
>>> The ndn protocol extension for Firefox needs relative URIs. The base URI used in resolution is the Data Name without the segment component.
>>>
>>> On Aug 10, 2014 9:56 PM, "Beichuan Zhang" wrote:
>>> >
>>> > This is complicated. In what scenarios do we need to use "up to a level" as an NDN name component?
>>> >
>>> > Can we treat "." and ".." as is, without the special meaning of "this level" and "up to a level"? I'm also fine with treating them as illegal name components.
>>> >
>>> > Beichuan
>
> _______________________________________________
> Nfd-dev mailing list
> Nfd-dev at lists.cs.ucla.edu
> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu Wed Aug 13 11:34:39 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 13 Aug 2014 11:34:39 -0700
Subject: [Nfd-dev] Gerrit: gitweb "not found"
In-Reply-To: <9B9E16C7-CFE5-4D8D-8D78-B04105FA7F34@ucla.edu>
References: <9B9E16C7-CFE5-4D8D-8D78-B04105FA7F34@ucla.edu>
Message-ID: 

Steps to reproduce:
1. visit http://gerrit.named-data.net/#/c/1112/6
2. click "(gitweb)" next to commit hash

Expected: the commit shows up, along with the repository tree
Actual: 404 Not Found for URI http://gerrit.named-data.net/gitweb?p=ndn-cxx.git;a=commit;h=4abdbf130483f5b362895b8a62c92b813a449e3d

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Capture.PNG
Type: image/png
Size: 34061 bytes
Desc: not available
URL: 

From alexander.afanasyev at UCLA.EDU Wed Aug 13 11:46:26 2014
From: alexander.afanasyev at UCLA.EDU (Alex Afanasyev)
Date: Wed, 13 Aug 2014 11:46:26 -0700
Subject: [Nfd-dev] Gerrit: gitweb "not found"
In-Reply-To: 
References: <9B9E16C7-CFE5-4D8D-8D78-B04105FA7F34@ucla.edu>
Message-ID: <6DB6BA6F-D65C-4C9D-AD23-D1C3290004D0@ucla.edu>

Hmm.. The links work for me... But you have to be logged in.

---
Alex

On Aug 13, 2014, at 11:34 AM, Junxiao Shi wrote:
> Steps to reproduce:
> 1. visit http://gerrit.named-data.net/#/c/1112/6
> 2. click "(gitweb)" next to commit hash
>
> Expected: the commit shows up, along with the repository tree
> Actual: 404 Not Found for URI http://gerrit.named-data.net/gitweb?p=ndn-cxx.git;a=commit;h=4abdbf130483f5b362895b8a62c92b813a449e3d

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu Wed Aug 13 14:07:19 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 13 Aug 2014 14:07:19 -0700
Subject: [Nfd-dev] Gerrit: gitweb "not found"
In-Reply-To: <6DB6BA6F-D65C-4C9D-AD23-D1C3290004D0@ucla.edu>
References: <9B9E16C7-CFE5-4D8D-8D78-B04105FA7F34@ucla.edu> <6DB6BA6F-D65C-4C9D-AD23-D1C3290004D0@ucla.edu>
Message-ID: 

Hi Alex

It's working now.

A while ago I changed the login cookie expiration time to a few months so I don't have to sign in each time. The problem resolved itself without me changing any cookie setting.

Yours, Junxiao

On Wed, Aug 13, 2014 at 11:46 AM, Alex Afanasyev <alexander.afanasyev at ucla.edu> wrote:
> Hmm.. The links work for me... But you have to be logged in.
>
> ---
> Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alexander.afanasyev at UCLA.EDU Wed Aug 13 14:08:44 2014
From: alexander.afanasyev at UCLA.EDU (Alex Afanasyev)
Date: Wed, 13 Aug 2014 14:08:44 -0700
Subject: [Nfd-dev] Gerrit: gitweb "not found"
In-Reply-To: 
References: <9B9E16C7-CFE5-4D8D-8D78-B04105FA7F34@ucla.edu> <6DB6BA6F-D65C-4C9D-AD23-D1C3290004D0@ucla.edu>
Message-ID: <3FE3E410-400B-40F0-AE47-BB8141F5BF85@ucla.edu>

I also changed settings so gitweb is available for all users.

---
Alex

On Aug 13, 2014, at 2:07 PM, Junxiao Shi wrote:
> Hi Alex
>
> It's working now.
>
> A while ago I changed the login cookie expiration time to a few months so I don't have to sign in each time.
> The problem resolved itself without me changing any cookie setting.
>
> Yours, Junxiao
>
> On Wed, Aug 13, 2014 at 11:46 AM, Alex Afanasyev wrote:
> Hmm.. The links work for me... But you have to be logged in.
>
> ---
> Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alexander.afanasyev at ucla.edu Thu Aug 14 18:50:01 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Thu, 14 Aug 2014 18:50:01 -0700
Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes
In-Reply-To: <53ED0F59.9070903@seas.wustl.edu>
References: <10B77F93-AE14-49D5-BE34-46169A4BCC9A@memphis.edu> <53ECE09C.4030705@seas.wustl.edu> <53ECE7D3.9040305@seas.wustl.edu> <53ED0F59.9070903@seas.wustl.edu>
Message-ID: 

Hi John,

I have proceeded with your suggestion, and the new package is on its way and should be available soon. I did basic tests, but could have screwed up again with upstart scripts, so be cautious :)

Small note.
ALL_FACES_PREFIXES should contain only /ndn/broadcast.

/ndn/guest should be part of ON_DEMAND_FACES_PREFIXES only on spurs, since only spurs is home for guest users. All other nodes should just have their site's prefix as part of on-demand.

---
Alex

On Aug 14, 2014, at 12:34 PM, John DeHart wrote:

>
> Alex and Junxiao,
>
> Is it possible to expand nfd-autoreg to handle two sets of prefixes, one for on-demand faces and one for all faces?
> Right now in the autoreg config file we have
> # Prefixes to register
> PREFIXES="/ndn/guest /ndn/broadcast /ndn/edu/memphis"
>
> What if we had
> # Prefixes to register on all faces:
> ALL_FACES_PREFIXES="/ndn/guest /ndn/broadcast"
>
> # Prefixes to register just on on-demand faces:
> ON_DEMAND_FACES_PREFIXES="/ndn/edu/memphis"
>
> On-demand faces would get both sets.
> Non-on-demand faces would get just ALL_FACES_PREFIXES.
>
> John
>
>
> On 8/14/14, 1:10 PM, Alex Afanasyev wrote:
>> I have proposed that before, but will repeat. I don't think that broadcast prefixes (and setting the strategy) is what NLSR should do. At most, this should be some other routing protocol. What we could do instead is to write a tiny little daemon like nfd-autoreg, whose job would be just to register /ndn/broadcast for every created face (not just on-demand ones). Maybe we can ignore application faces, though it doesn't matter much for the NDN testbed.
>>
>> ---
>> Alex
>>
>> On Aug 14, 2014, at 9:55 AM, Junxiao Shi wrote:
>>
>>> Hi John
>>>
>>> fib.max-faces-per-prefix is the maximum number of Routes with the same Name prefix that NLSR will install into the RIB.
>>> The parameter starts with "fib." for historical reasons; it concerns what NLSR would install into the RIB, not the FIB.
>>>
>>> I suggest setting this parameter to 60, so that expansions in the near future are also covered.
>>> When there are no more than 5 backbone links, setting it to 5 or 60 has the same effect.
>>>
>>> This short-term solution would stop working when a HUB has more than 60 backbone links, but this day is unlikely to come within one year.
>>>
>>> Yours, Junxiao
>>>
>>>
>>> On Thu, Aug 14, 2014 at 9:46 AM, John DeHart wrote:
>>>
>>> Junxiao,
>>>
>>> Ahh. I was thinking of fib.max-faces-per-prefix as a limit on all faces, not just
>>> the ones that NLSR is concerned with.
>>>
>>> In the testbed right now we have 5 nodes that
>>> each have 5 links to other nodes. Those are the most connected nodes.
>>> So, if we set fib.max-faces-per-prefix for NLSR to >= 5 we should be ok.
>>> Right?
>>>
>>> John
>>>
>>> _______________________________________________
>>> Operators mailing list
>>> Operators at lists.named-data.net
>>> http://lists.named-data.net/mailman/listinfo/operators
>>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shijunxiao at email.arizona.edu Thu Aug 14 18:53:02 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 14 Aug 2014 18:53:02 -0700
Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes
In-Reply-To: 
References: <10B77F93-AE14-49D5-BE34-46169A4BCC9A@memphis.edu> <53ECE09C.4030705@seas.wustl.edu> <53ECE7D3.9040305@seas.wustl.edu> <53ED0F59.9070903@seas.wustl.edu>
Message-ID: 

Dear folks

If UCLA serves as the home router for /ndn/guest, it should announce this namespace in NLSR, so that clients connected on other routers can reach this prefix.
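John's two-set proposal reduces to a small per-face decision rule. A hypothetical sketch of that rule, where FaceInfo and registerPrefix() stand in for whatever mechanism the real nfd-autoreg uses to observe face creation and send RIB registration commands:

    // Hypothetical sketch of the two-prefix-set registration rule (not nfd-autoreg source).
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct FaceInfo { bool isOnDemand; };

    void
    registerPrefix(const FaceInfo&, const std::string& prefix) // stand-in for a RIB register command
    {
      std::cout << "register " << prefix << std::endl;
    }

    void
    onFaceCreated(const FaceInfo& face,
                  const std::vector<std::string>& allFacesPrefixes,  // e.g. /ndn/broadcast
                  const std::vector<std::string>& onDemandPrefixes)  // e.g. the site prefix; /ndn/guest on spurs
    {
      for (std::size_t i = 0; i < allFacesPrefixes.size(); ++i)
        registerPrefix(face, allFacesPrefixes[i]);     // every face gets these

      if (face.isOnDemand)
        for (std::size_t i = 0; i < onDemandPrefixes.size(); ++i)
          registerPrefix(face, onDemandPrefixes[i]);   // on-demand faces get both sets
    }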
Yours, Junxiao On Aug 14, 2014 6:50 PM, "Alex Afanasyev" wrote: > > Hi John, > > I have proceed with your suggestion and new package is on its way to be available soon. I did basic tests, but could have screwed up again with upstart scripts, so be cautious :) > > Small note. ALL_FACES_PREFIXES should be only for /ndn/broadcast > > /ndn/guest should be part of ON_DEMAND_FACES_PREFIXES only on spurs, since only spurs is home for guest users. All other nodes should just have their site's prefix as part of on-demand. > > --- > Alex > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Thu Aug 14 18:54:37 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Thu, 14 Aug 2014 18:54:37 -0700 Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes In-Reply-To: References: <10B77F93-AE14-49D5-BE34-46169A4BCC9A@memphis.edu> <53ECE09C.4030705@seas.wustl.edu> <53ECE7D3.9040305@seas.wustl.edu> <53ED0F59.9070903@seas.wustl.edu> Message-ID: On Aug 14, 2014, at 6:53 PM, Junxiao Shi wrote: > Dear folks > > If UCLA serves as the home router for /ndn/guest, it should announce this namespace in NLSR, so that clients connected on other routers can reach this prefix. > I have already made this change earlier today. --- Alex > Yours, Junxiao > On Aug 14, 2014 6:50 PM, "Alex Afanasyev" wrote: > > > > Hi John, > > > > I have proceed with your suggestion and new package is on its way to be available soon. I did basic tests, but could have screwed up again with upstart scripts, so be cautious :) > > > > Small note. ALL_FACES_PREFIXES should be only for /ndn/broadcast > > > > /ndn/guest should be part of ON_DEMAND_FACES_PREFIXES only on spurs, since only spurs is home for guest users. All other nodes should just have their site's prefix as part of on-demand. > > > > --- > > Alex > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdd at seas.wustl.edu Thu Aug 14 19:25:12 2014 From: jdd at seas.wustl.edu (John DeHart) Date: Thu, 14 Aug 2014 21:25:12 -0500 Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes In-Reply-To: References: <10B77F93-AE14-49D5-BE34-46169A4BCC9A@memphis.edu> <53ECE09C.4030705@seas.wustl.edu> <53ECE7D3.9040305@seas.wustl.edu> <53ED0F59.9070903@seas.wustl.edu> Message-ID: <53ED6F88.3070600@seas.wustl.edu> Alex, Is this new package ready to go? Looks like you have it installed on UCLA. Should I update the rest? John On 8/14/14, 8:50 PM, Alex Afanasyev wrote: > Hi John, > > I have proceed with your suggestion and new package is on its way to > be available soon. I did basic tests, but could have screwed up > again with upstart scripts, so be cautious :) > > Small note. ALL_FACES_PREFIXES should be only for /ndn/broadcast > > /ndn/guest should be part of ON_DEMAND_FACES_PREFIXES only on spurs, > since only spurs is home for guest users. All other nodes should just > have their site's prefix as part of on-demand. > > --- > Alex > > On Aug 14, 2014, at 12:34 PM, John DeHart > wrote: > >> >> Alex and Junxiao, >> >> Is it possible to expand nfd-autoreg to handle two sets of prefixes, >> one for on-demand faces and one for all faces? 
>> Right now in the config file for autoreg config file we have >> # Prefixes to register >> PREFIXES="/ndn/guest /ndn/broadcast /ndn/edu/memphis" >> >> What if we had >> # Prefixes to register on All faces: >> ALL_FACES_PREFIXES="/ndn/guest /ndn/broadcast" >> >> # Prefixes to register just on on-demand faces: >> ON_DEMAND_FACES_PREFIXES="/ndn/edu/memphis" >> >> on-demand faces would get both sets. >> Non-on-demand faces would get just ALL_FACES_PREFIXES. >> >> John >> >> >> On 8/14/14, 1:10 PM, Alex Afanasyev wrote: >>> I have proposed that before, but will repeat. I don't think that >>> broadcast prefixes (and setting the strategy) is what NLSR should >>> do. At most, this should be some other routing protocol. What we >>> could do instead is to write a tiny little daemon like nfd-autoreg, >>> which job would be just to register /ndn/broadcast for every created >>> face (not just on-demand ones). May be we can ignore application >>> faces, though it doesn't matter for NDN testbed much. >>> >>> --- >>> Alex >>> >>> On Aug 14, 2014, at 9:55 AM, Junxiao Shi >>> > >>> wrote: >>> >>>> Hi John >>>> >>>> fib.max-faces-per-prefix is the maximum number of Routes with same >>>> Name prefix that NLSR will install to the RIB. >>>> The parameter starts with "fib." due to historical reason; it >>>> concerns what NLSR would install to the RIB, not FIB. >>>> >>>> I suggest setting this parameter to 60, so that expansions in the >>>> near future are also covered. >>>> When there are no more than 5 backbone links, setting it to 5 or 60 >>>> has same effect. >>>> >>>> This short term solution would stop working when a HUB has more >>>> than 60 backbone links, but this day is unlikely to come within one >>>> year. >>>> >>>> Yours, Junxiao >>>> >>>> >>>> On Thu, Aug 14, 2014 at 9:46 AM, John DeHart >>> > wrote: >>>> >>>> >>>> Junxiao, >>>> >>>> Ahh. I was thinking of the fib.max-faces-per-prefix as a limit >>>> on all faces not just >>>> ones that NLSR is concerned with. >>>> >>>> In the testbed right now we have 5 nodes that >>>> each have 5 links to other nodes. Those are the most connected >>>> nodes. >>>> So, if we set the fib.max-faces-per-prefix for NLSR to >= 5 we >>>> should be ok. >>>> Right? >>>> >>>> John >>>> >>>> _______________________________________________ >>>> Operators mailing list >>>> Operators at lists.named-data.net >>>> http://lists.named-data.net/mailman/listinfo/operators >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Thu Aug 14 19:35:19 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Thu, 14 Aug 2014 19:35:19 -0700 Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes In-Reply-To: <53ED6F88.3070600@seas.wustl.edu> References: <10B77F93-AE14-49D5-BE34-46169A4BCC9A@memphis.edu> <53ECE09C.4030705@seas.wustl.edu> <53ECE7D3.9040305@seas.wustl.edu> <53ED0F59.9070903@seas.wustl.edu> <53ED6F88.3070600@seas.wustl.edu> Message-ID: <2268BD02-6462-4665-9C8F-7510D19FCC46@ucla.edu> Yes. I installed on spurs (to test) just after it finished building. You can install on other nodes. One quick question. Did you install nfd-autoreg and nfd-status-http-server packages or somehow manually made upstart scripts (they were not installed on spurs... not sure why). --- Alex On Aug 14, 2014, at 7:25 PM, John DeHart wrote: > > Alex, > > Is this new package ready to go? Looks like you have it installed on UCLA. > Should I update the rest? 
> John
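A sketch of the two-tier registration John proposes and Alex refines above, as an /etc/default/nfd-autoreg fragment (the variable names are John's; the exact file location and how the upstart script feeds these values to nfd-autoreg are assumptions):

    # Illustrative /etc/default/nfd-autoreg for a hub other than spurs.
    # Prefixes to register on all faces:
    ALL_FACES_PREFIXES="/ndn/broadcast"
    # Prefixes to register only on on-demand faces (the hub's own site
    # prefix; spurs alone would also list /ndn/guest here):
    ON_DEMAND_FACES_PREFIXES="/ndn/edu/memphis"

On-demand faces receive both sets; all other faces receive only ALL_FACES_PREFIXES, exactly as John describes.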
From jdd at seas.wustl.edu Thu Aug 14 19:25:12 2014
From: jdd at seas.wustl.edu (John DeHart)
Date: Thu, 14 Aug 2014 21:45:51 -0500
Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes
In-Reply-To: <2268BD02-6462-4665-9C8F-7510D19FCC46@ucla.edu>
Message-ID: <53ED745F.2050406@seas.wustl.edu>

I'll do the other nodes now.

I did not install separate packages for nfd-autoreg and nfd-status-http-server; I assumed they were part of the nfd package. I think the binaries were installed with the nfd package. I installed the upstart scripts for them that I got from somewhere, probably from spurs.

John

On 8/14/14, 9:35 PM, Alex Afanasyev wrote:
> Yes. I installed on spurs (to test) just after it finished building. You can install on other nodes.
>
> One quick question. Did you install nfd-autoreg and nfd-status-http-server packages or somehow manually made upstart scripts (they were not installed on spurs... not sure why)?

From alexander.afanasyev at ucla.edu Thu Aug 14 19:54:35 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Thu, 14 Aug 2014 19:54:35 -0700
Subject: [Nfd-dev] [Operators] reachability of broadcast and guest prefixes
In-Reply-To: <53ED745F.2050406@seas.wustl.edu>
Message-ID: <223AEA41-843E-4EF8-841B-611B50EB9FC9@ucla.edu>

On Aug 14, 2014, at 7:45 PM, John DeHart wrote:
> I'll do the other nodes now.
> I did not install separate packages for nfd-autoreg and nfd-status-http-server; I assumed they were part of the nfd package.

Got it. Just install these two new packages then; it will be easier to manage. The only things they contain are the upstart scripts and a file for /etc/default/.

--- Alex

From alexander.afanasyev at ucla.edu Fri Aug 15 16:35:29 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Fri, 15 Aug 2014 16:35:29 -0700
Subject: [Nfd-dev] nlsr observations
Message-ID: <0856370D-65BC-41BC-BA48-2B9EDBAA9B19@ucla.edu>

Just a few observations I made from log files. After I restarted NLSR, I started to get a bunch of "Face not found" errors, which seem to be non-recoverable. Not entirely sure what exactly is happening.
--- 0140815161151753 HelloProtocol 88: Interest Received for Name: /ndn/edu/uci/%C1.Router/ndnhub/NLSR/INFO/%07%25%08%03ndn%08%03edu%08%04ucla%08%08%C1.Router%08%02cs%08%05aleph 20140815161151753 HelloProtocol 94: Neighbor: /ndn/edu/ucla/%C1.Router/cs/aleph 20140815161151758 HelloProtocol 102: Sending out data for name: /ndn/edu/uci/%C1.Router/ndnhub/NLSR/INFO/%07%25%08%03ndn%08%03edu%08%04ucla%08%08%C1.Router%08%02cs%08%05aleph 20140815161152544 HelloProtocol 88: Interest Received for Name: /ndn/edu/uci/%C1.Router/ndnhub/NLSR/INFO/%07%22%08%03ndn%08%03org%08%05caida%08%08%C1.Router%08%05click 20140815161152544 HelloProtocol 94: Neighbor: /ndn/org/caida/%C1.Router/click 20140815161152549 HelloProtocol 102: Sending out data for name: /ndn/edu/uci/%C1.Router/ndnhub/NLSR/INFO/%07%22%08%03ndn%08%03org%08%05caida%08%08%C1.Router%08%05click 20140815161152658 HelloProtocol 264: Face not found (code: 410) 20140815161152767 HelloProtocol 39: Expressing Interest :/ndn/edu/ucla/%C1.Router/cs/spurs/NLSR/INFO/%07%21%08%03ndn%08%03edu%08%03uci%08%08%C1.Router%08%06ndnhub 20140815161152767 HelloProtocol 39: Expressing Interest :/ndn/edu/ucla/%C1.Router/cs/aleph/NLSR/INFO/%07%21%08%03ndn%08%03edu%08%03uci%08%08%C1.Router%08%06ndnhub 20140815161152791 HelloProtocol 167: Received data for INFO(name): /ndn/edu/ucla/%C1.Router/cs/spurs/NLSR/INFO/%07%21%08%03ndn%08%03edu%08%03uci%08%08%C1.Router%08%06ndnhub/%00%00%01G%DB%F1%09%C8 20140815161152791 HelloProtocol 170: Data signed with: /ndn/edu/ucla/%C1.Router/cs/spurs/NLSR/KEY/ksk-1408141278958/ID-CERT 20140815161152791 HelloProtocol 184: Data validation successful for INFO(name): /ndn/edu/ucla/%C1.Router/cs/spurs/NLSR/INFO/%07%21%08%03ndn%08%03edu%08%03uci%08%08%C1.Router%08%06ndnhub/%00%00%01G%DB%F1%09%C8 20140815161152791 HelloProtocol 191: Neighbor : /ndn/edu/ucla/%C1.Router/cs/spurs 20140815161152791 HelloProtocol 192: Old Status: 1 New Status: 1 20140815161152792 HelloProtocol 167: Received data for INFO(name): /ndn/edu/ucla/%C1.Router/cs/aleph/NLSR/INFO/%07%21%08%03ndn%08%03edu%08%03uci%08%08%C1.Router%08%06ndnhub/%00%00%01G%DB%F1%09%CC 20140815161152792 HelloProtocol 170: Data signed with: /ndn/edu/ucla/%C1.Router/cs/aleph/NLSR/KEY/ksk-1408144241516/ID-CERT 20140815161152792 HelloProtocol 184: Data validation successful for INFO(name): /ndn/edu/ucla/%C1.Router/cs/aleph/NLSR/INFO/%07%21%08%03ndn%08%03edu%08%03uci%08%08%C1.Router%08%06ndnhub/%00%00%01G%DB%F1%09%CC 20140815161152792 HelloProtocol 191: Neighbor : /ndn/edu/ucla/%C1.Router/cs/aleph 20140815161152792 HelloProtocol 192: Old Status: 1 New Status: 1 20140815161152876 HelloProtocol 264: Face not found (code: 410) ... From alexander.afanasyev at ucla.edu Fri Aug 15 16:48:12 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Fri, 15 Aug 2014 16:48:12 -0700 Subject: [Nfd-dev] nlsr observations In-Reply-To: <0856370D-65BC-41BC-BA48-2B9EDBAA9B19@ucla.edu> References: <0856370D-65BC-41BC-BA48-2B9EDBAA9B19@ucla.edu> Message-ID: <4A964659-B390-4A03-8662-94B0CBE4A0B8@ucla.edu> Follow up. I think NLSR does not work properly if it is restarted without restarting the daemon. I suspect something related to cache or face notifications. At least these are my observations. If I restart NLSR (sudo restart nlsr), I'm getting these "Face not found" errors. If I restart nfd and nlsr at the same time, things seem to start working. --- Alex On Aug 15, 2014, at 4:35 PM, Alex Afanasyev wrote: > Just a few observations I made from log files. 
> After I restarted NLSR, I started to get a bunch of "Face not found" errors, which seem to be non-recoverable. Not entirely sure what exactly is happening.

From bzhang at cs.arizona.edu Sat Aug 16 08:03:13 2014
From: bzhang at cs.arizona.edu (Beichuan Zhang)
Date: Sat, 16 Aug 2014 08:03:13 -0700
Subject: [Nfd-dev] monitoring data plane
Message-ID: <59E82908-E8CC-4246-84C5-5E38B9EBD0DF@cs.arizona.edu>

Hi John,

The current testbed status page monitors the control plane, i.e., prefixes in the routing table. I think we also need to monitor data-plane reachability, e.g., as measured by ndnping. More specifically:

- Each node runs a script that reads the list of prefixes from a local file and ndnpings each prefix a number of times, say 10. The output is published as an HTML or XML page under the current web server on the node. The measurement is done periodically.

- The stat collection script at WashU retrieves the pages from the individual nodes, parses them, and adds the result as another table to the current status page. This table has the same layout as the current table, but in each cell it shows the % of ping packets received in the most recent measurement.

This information would be helpful once apps are deployed on the testbed. What do you think?

Beichuan
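A minimal sketch of the per-node measurement script Beichuan proposes above, assuming a prefix list at /etc/ndn/ping-prefixes.txt (one prefix per line), an output path under the node's web server docroot, and that the installed ndnping accepts a -c ping-count option; all three are assumptions, not a tested tool:

    #!/bin/sh
    # Ping every prefix from the local list 10 times and publish the raw
    # output as an HTML page for the WashU collector to fetch.
    PREFIX_FILE=/etc/ndn/ping-prefixes.txt    # assumed location
    OUT=/var/www/ndn-status/ping.html         # assumed docroot path
    {
      echo "<html><body><pre>"
      while read prefix; do
        [ -z "$prefix" ] && continue
        echo "== $prefix =="
        ndnping -c 10 "$prefix" 2>&1          # -c (count) assumed supported
      done < "$PREFIX_FILE"
      echo "</pre></body></html>"
    } > "$OUT"

Running this from cron makes the measurement periodic; the collector can then parse each page to compute the % of pings answered, as in the second bullet.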
From shijunxiao at email.arizona.edu Sat Aug 16 09:25:00 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Sat, 16 Aug 2014 09:25:00 -0700
Subject: [Nfd-dev] [Operators] monitoring data plane

Dear folks,

An alternative is: write a program that connects to each HUB via ndn-cxx's TcpTransport and sends ping Interests from there.

Benefit: this program needs to run on only one machine; no deployment on HUBs.
Drawback: the RTT includes the delay between the laptop running this program and the HUB - but it's sufficient to test reachability.

Please check out the reachability page for my ndn6.tk node: http://yoursunny.com/p/ndn6/reachability/
It uses WebSocket connections to every router. It works on recent versions of Chrome, including the desktop and Android editions. "NDN6" and "UCLA/v6" will appear red if your laptop doesn't have IPv6.

Yours, Junxiao

On Sat, Aug 16, 2014 at 8:03 AM, Beichuan Zhang wrote:
> The current testbed status page monitors the control plane, i.e., prefixes in the routing table. I think we also need to monitor data-plane reachability, e.g., as measured by ndnping.

From jburke at remap.ucla.edu Mon Aug 18 08:51:26 2014
From: jburke at remap.ucla.edu (Burke, Jeff)
Date: Mon, 18 Aug 2014 15:51:26 +0000
Subject: [Nfd-dev] NFD documentation comments

Hi NFD team,

Yesterday I tried to methodically follow the directions for installing NFD on a few machines, and wanted to share some comments on the documentation. I have installed it in the past, but tried to forget what I remembered. :) Please take/ignore as you will. I hope the comments are helpful.

cheers,
Jeff

---

Once I figured out where to look, the docs are pretty easy to figure out and things build/install smoothly. Congrats! The main confusing thing is that there are several candidate starting points (all of which I ended up needing), and the replication of documentation across the Wiki, the github source distribution, and the website.

- I'd suggest not duplicating pages on the Wiki and the Web. It leads to confusion and inconsistencies. For example, "Getting Started with NFD" has this circular reference: On the web: "A more recent version of this document may be available on NFD Wiki." On the wiki: "A more recent version of this document may be available on NFD Homepage."
- The pointer at the top of README.md should more directly address the person who wants to get the package installed, i.e., "For complete documentation, including step-by-step installation instructions, go to NFD's homepage." If this change is made, then I'd suggest everything after the overview in the README can move into the installation instructions rather than the README. This would reduce redundancy.

- Ideally, I think that all documents/sites/etc. should emphasize a single starting point for working practically with NFD - either index.rst or the README. They should also point to the named-data web (rather than into Github, say). It seems like one should be able to open and navigate the docs on github, but this doesn't always work. I'd suggest that people browsing on github be more clearly directed to the named-data.net starting index page or their local docs for NFD installation.

- Not sure that README.md and docs/README.rst should be different.

- There is information about binaries spread across many places: the README.md, the FAQ, index.rst, and "Getting Started", the latter of which is described as source code (not binary) instructions by the README.md. Perhaps this could be consolidated into a single location and referenced as needed.

- Also, there is discussion of platform build experience in "Additional Information" that would probably be more useful in the "Getting started" page near the source instructions.

- There is an INSTALL.rst file, which is really source build instructions. It is not nearly as useful as "Getting Started with NFD". I'd suggest the two files be combined and one eliminated... as "INSTALL" is the obvious place to look, but incomplete. Or, if you want to keep them separate, the INSTALL file needs to end with a pointer back to Getting Started to continue configuration, and perhaps start with proper clone instructions, etc. The INSTALL document should also be titled something more descriptive, like "Building NFD from source".

- The Wiki needs to be more prominently linked in the documentation, especially the README, as the place to go to get packet format and protocol information (if this is indeed the right starting point).

- The documentation often needs to be read ahead to be understood. This should be corrected where possible. For example, the "Getting Started with NFD" document sends you to install ndn-cxx and NFD according to instructions on other pages, but further down gives you the correct clone instructions. The clone instructions should be given first, then the reader sent to the pages to follow the install instructions. Another example is the MacOS X build instructions (including the PKG_CONFIG fix), which come after what appear to be build instructions for all platforms. This should be rearranged, or at least given different titles to be clearer.

- Each document in index.rst should have a short explanation of what people will find there / why they should go there. For example, "Getting started" is how to install, whereas README provides project background, etc.

- Consider having a pointer to RELEASE NOTES in the README, and certainly in index.rst.

- The FAQ document is sort of a catch-all. To me, all of the answers really should be in the appropriate sections of documentation, rather than in a FAQ list without an index, but I understand its current purpose while the documentation is still in its infancy. In particular, though, "How to configure NFD security" should be the stub of a separate document on NFD security configuration. Also, I wonder whether the FAQ should be a Wiki page, so it is more easily community-editable?

- The default security actions that are performed by the binary installations should be described (and motivated) so they can be duplicated by those doing source installs. And/or, scripts should be provided to do the same things in the source installs. Further, though NFD ships with no security configuration, it does by default create identities the first time it is run. This should be better described.

- Unit tests are not only needed by NFD developers; they are of interest to anyone wanting to check for problems building this new code on their hosts. (I had to run them to help narrow down a problem with EthernetFace on my machine.) The unit test installation instructions should be included in the main installation instructions. Since there is nothing else of consequence in README-dev, I think that file should be removed for now, and a pointer provided in the README to the NFD wiki, which I'm sure contains more detailed and up-to-date information for developers.

- The test producer and consumer distributed with the ndn-cxx library can't be run without NFD. The docs should mention this, even though the apps throw an error.

From jburke at remap.ucla.edu Mon Aug 18 13:25:11 2014
From: jburke at remap.ucla.edu (Burke, Jeff)
Date: Mon, 18 Aug 2014 20:25:11 +0000
Subject: [Nfd-dev] Intended scope of client.conf

Hi folks,

Is there any specification for the purpose and scope of client.conf? E.g., how it can be located, what can be specified there, and what code should pay attention to it?

We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than the existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too.

Please see: http://redmine.named-data.net/issues/1850

Thanks,
Jeff

From dibenede at cs.colostate.edu Mon Aug 18 13:38:05 2014
From: dibenede at cs.colostate.edu (Steve DiBenedetto)
Date: Mon, 18 Aug 2014 14:38:05 -0600
Subject: [Nfd-dev] Intended scope of client.conf

On Aug 18, 2014, at 2:25 PM, Burke, Jeff wrote:
> Is there any specification for the purpose and scope of client.conf? E.g., how it can be located, what can be specified there, and what code should pay attention to it?

I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364
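Since the issue Steve points to covers the file format, here is a sketch of creating a per-user client.conf; the key names follow redmine #1364 and the KeyChainConf page referenced in the replies below, while the concrete values are illustrative assumptions:

    # Write a per-user client.conf (values are illustrative assumptions).
    mkdir -p ~/.ndn
    cat > ~/.ndn/client.conf <<EOF
    transport=unix:///var/run/nfd.sock
    pib=pib-sqlite3
    tpm=tpm-file
    EOF

If ~/.ndn/client.conf is absent, the system-wide /etc/ndn/client.conf (or /usr/local/etc/ndn/client.conf) applies instead, as Alex explains below.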
URL: From alexander.afanasyev at UCLA.EDU Mon Aug 18 13:49:59 2014 From: alexander.afanasyev at UCLA.EDU (Alex Afanasyev) Date: Mon, 18 Aug 2014 15:49:59 -0500 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: References: Message-ID: <3B03E99D-8560-4A92-B0A2-B67025501C49@ucla.edu> On Aug 18, 2014, at 3:38 PM, Steve DiBenedetto wrote: > > On Aug 18, 2014, at 2:25 PM, Burke, Jeff wrote: > >> >> Hi folks, >> >> Is there any specification for the purpose and scope of client.conf. E.g., how it can be located, what can be specified there, and what code should pay attention to it? > > I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364 . This is basically the spec. A new addition that was made long time ago was part of http://redmine.named-data.net/issues/1532 and is documented in http://redmine.named-data.net/projects/ndn-cxx/wiki/KeyChainConf --- Alex > >> >> >> We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. >> >> Please see: >> http://redmine.named-data.net/issues/1850 >> >> Thanks, >> Jeff >> >> _______________________________________________ >> Nfd-dev mailing list >> Nfd-dev at lists.cs.ucla.edu >> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Mon Aug 18 14:35:41 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Mon, 18 Aug 2014 21:35:41 +0000 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: <3B03E99D-8560-4A92-B0A2-B67025501C49@ucla.edu> Message-ID: Right, thanks. I basically understand how all of this works, but think that we may want to explicitly describe it in the documentation rather than leave it implied by redmine entries and code... then we can make sure that the specification is followed in NDN-CCL. So, the idea is that client.conf provides per-user configuration for NDN applications that gives the default keystore and mechanism for connecting to the local daemon. For now, the default keystore specified in this configuration must be used if an application wants to interact with the identities/keys manipulated by the ndnsec tools. Is that the right way to state it? A few questions: - NFD and NRD use their own system-wide configuration file, not client.conf. The unix socket they use is defined in their nfd.conf file. But where is the keystore for the keys used by NFD and NRD, and how is it configured? I couldn't find this in the developers guide. - It is assumed that there is only one installation of NFD/NRD per host. So, shouldn't the socket configuration and protocol be per-host rather than per user? I understand that for convenience they may be in client.conf, but want to confirm... perhaps their should be a client-default.conf in the same place as nfd.conf, and just override it from ~/.ndn/client.conf? - There is an operator identity created by default at the time of installation of NFD/NRD. This seems to be associated with the user who installed the software. 
For NDN applications that run under other (non-root) users, is the idea that they can use their own client.conf settings, but the operator of the machine will need to authorize their key to sign things like prefix registration commands for the daemon? - Because it uses ndn-cxx, ndnsec manipulates the PIB/TPM for the current user, based on the settings in that user's client.conf. Is that correct? This may need to be clarified in the documentation. Also, ndnsec-list does not seem to take into account the TPM setting, it just lists all of the keys in the PIB for the user. Perhaps it could indicate which TPM they are stored in, to aid debugging? Thanks, Jeff From: Alex Afanasyev > Date: Mon, 18 Aug 2014 15:49:59 -0500 To: Steve DiBenedetto > Cc: Jeff Burke >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf On Aug 18, 2014, at 3:38 PM, Steve DiBenedetto > wrote: On Aug 18, 2014, at 2:25 PM, Burke, Jeff > wrote: Hi folks, Is there any specification for the purpose and scope of client.conf. E.g., how it can be located, what can be specified there, and what code should pay attention to it? I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364 . This is basically the spec. A new addition that was made long time ago was part of http://redmine.named-data.net/issues/1532 and is documented in http://redmine.named-data.net/projects/ndn-cxx/wiki/KeyChainConf --- Alex We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. Please see: http://redmine.named-data.net/issues/1850 Thanks, Jeff _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Mon Aug 18 19:10:44 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Tue, 19 Aug 2014 02:10:44 +0000 Subject: [Nfd-dev] [Operators] monitoring data plane In-Reply-To: Message-ID: Also, in case it's helpful, I've updated (hastily) the ndn-ping example in ndn-js to work with the current ping server and NFD. https://github.com/named-data/ndn-js/tree/master/examples/ndnping A live version is here: http://named-data.net/apps/live/ndn-ping.html This only connects to a single hub, but perhaps it would be useful in building a multi-hub test, though the code is not all that well written (sorry). Jeff From: Junxiao Shi > Date: Sat, 16 Aug 2014 09:25:00 -0700 To: "bzhang at cs.arizona.edu" > Cc: ">" >, ">" > Subject: Re: [Nfd-dev] [Operators] monitoring data plane Dear folks An alternate is: write a program to connect to each HUB via ndn-cxx's TcpTransport, and send ping Interest over there. Benefit: This program needs to run on only one machine. No deployment on HUBs. Drawback: RTT includes the delay between the laptop running this program and the HUB - but it's sufficient to test reachability. 
Please check out the reachability page for my ndn6.tk node: http://yoursunny.com/p/ndn6/reachability/ It uses WebSocket connections to every router. It works on recent version of Chrome, including desktop and Android editions. "NDN6" and "UCLA/v6" will appear red if your laptop doesn't have IPv6. Yours, Junxiao On Sat, Aug 16, 2014 at 8:03 AM, Beichuan Zhang > wrote: Hi John, The current testbed status page monitors the control plane, i.e., prefixes in the routing table. I think we also need monitor data plane reachability e.g., measured by ndnping. More specifically, - Each node runs a script that reads the list of prefixes from a local file, and ndnping each prefix for a number of times, say 10. The output is published as an HTML or XML page under the current web server on the node. The measurement is done periodically. - The stat collection script at WashU retrieves the page from individual node, parse them, and the result is another table to the current status page. This table has the same layout as the current table, but in each cell it shows the % of ping packets received in the most recent measurement. This information would be helpful once apps are deployed on the testbed. What do you think? Beichuan _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From yingdi at CS.UCLA.EDU Tue Aug 19 09:25:42 2014 From: yingdi at CS.UCLA.EDU (Yingdi Yu) Date: Tue, 19 Aug 2014 11:25:42 -0500 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: References: Message-ID: Hi Jeff, On Aug 18, 2014, at 4:35 PM, Burke, Jeff wrote: > So, the idea is that client.conf provides per-user configuration for NDN applications that gives the default keystore and mechanism for connecting to the local daemon. For now, the default keystore specified in this configuration must be used if an application wants to interact with the identities/keys manipulated by the ndnsec tools. > Is that the right way to state it? This client.conf is the configuration file of ndn-cxx, so all apps (including ndnsec) compiled against ndn-cxx will use the keystore info in this file as the default configuration. If an app wants to access the identities/keys created through ndn-cxx, the paths in the conf file should be the right place to go. > - NFD and NRD use their own system-wide configuration file, not client.conf. The unix socket they use is defined in their nfd.conf file. But where is the keystore for the keys used by NFD and NRD, and how is it configured? I couldn't find this in the developers guide. NFD and NRD are built against ndn-cxx, so they use the keystore info in the client.conf. We will clarify that in the dev guide. > - It is assumed that there is only one installation of NFD/NRD per host. So, shouldn't the socket configuration and protocol be per-host rather than per user? I understand that for convenience they may be in client.conf, but want to confirm... We may have per-host config for socket, but for keystore info, it might be better to keep per-user configuration. > perhaps their should be a client-default.conf in the same place as nfd.conf, and just override it from ~/.ndn/client.conf? I think a client-default.conf is possible if we set user's home directory as the default path to keystore. > - There is an operator identity created by default at the time of installation of NFD/NRD. 
This seems to be associated with the user who installed the software. For NDN applications that run under other (non-root) users, is the idea that they can use their own client.conf settings, but the operator of the machine will need to authorize their key to sign things like prefix registration commands for the daemon? Ideally yes, but command interest validation is turned off. > - Because it uses ndn-cxx, ndnsec manipulates the PIB/TPM for the current user, based on the settings in that user's client.conf. Is that correct? This may need to be clarified in the documentation. Also, ndnsec-list does not seem to take into account the TPM setting, it just lists all of the keys in the PIB for the user. Perhaps it could indicate which TPM they are stored in, to aid debugging? ndn-cxx internally maintains the consistency between PIB and TPM. Right now ndn-cxx assumed that there is only one TPM for one PIB, so we do not list the TPM information for each key. But I think we may need the TPM info when PIB handles multiple TPMs at the same time. Yingdi > > > Thanks, > Jeff > > > From: Alex Afanasyev > Date: Mon, 18 Aug 2014 15:49:59 -0500 > To: Steve DiBenedetto > Cc: Jeff Burke , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf > >> >> On Aug 18, 2014, at 3:38 PM, Steve DiBenedetto wrote: >> >>> >>> On Aug 18, 2014, at 2:25 PM, Burke, Jeff wrote: >>> >>>> >>>> Hi folks, >>>> >>>> Is there any specification for the purpose and scope of client.conf. E.g., how it can be located, what can be specified there, and what code should pay attention to it? >>> >>> I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364 . >> >> This is basically the spec. A new addition that was made long time ago was part of http://redmine.named-data.net/issues/1532 and is documented in http://redmine.named-data.net/projects/ndn-cxx/wiki/KeyChainConf >> >> --- >> Alex >> >>> >>>> >>>> >>>> We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. >>>> >>>> Please see: >>>> http://redmine.named-data.net/issues/1850 >>>> >>>> Thanks, >>>> Jeff >>>> >>>> _______________________________________________ >>>> Nfd-dev mailing list >>>> Nfd-dev at lists.cs.ucla.edu >>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>> >>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >> > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jburke at remap.ucla.edu Tue Aug 19 09:44:24 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Tue, 19 Aug 2014 16:44:24 +0000 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: Message-ID: Hi Yingdi, Thanks for the reply ? comments below. 
Jeff From: Yingdi Yu > Date: Tue, 19 Aug 2014 11:25:42 -0500 To: Jeff Burke > Cc: Alexander Afanasyev >, Steve DiBenedetto >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf Hi Jeff, On Aug 18, 2014, at 4:35 PM, Burke, Jeff > wrote: So, the idea is that client.conf provides per-user configuration for NDN applications that gives the default keystore and mechanism for connecting to the local daemon. For now, the default keystore specified in this configuration must be used if an application wants to interact with the identities/keys manipulated by the ndnsec tools. Is that the right way to state it? This client.conf is the configuration file of ndn-cxx, so all apps (including ndnsec) compiled against ndn-cxx will use the keystore info in this file as the default configuration. If an app wants to access the identities/keys created through ndn-cxx, the paths in the conf file should be the right place to go. Alex's Issue #1850 suggests a stronger idea - that client.conf is for all libraries / applications interacting locally with NFD/NRD. Is that correct? - NFD and NRD use their own system-wide configuration file, not client.conf. The unix socket they use is defined in their nfd.conf file. But where is the keystore for the keys used by NFD and NRD, and how is it configured? I couldn't find this in the developers guide. NFD and NRD are built against ndn-cxx, so they use the keystore info in the client.conf. We will clarify that in the dev guide. I'm still not sure that I understand why NFD/NRD should use a user's ~/.ndn/client.conf for their identity, if they are system-wide services. This means that NFD and NRD will have different identities/signing keys on the same host if they are started by different users? Shouldn't the daemon's identities be stable per-host unless changed by the owner/operator? - It is assumed that there is only one installation of NFD/NRD per host. So, shouldn't the socket configuration and protocol be per-host rather than per user? I understand that for convenience they may be in client.conf, but want to confirm... We may have per-host config for socket, but for keystore info, it might be better to keep per-user configuration. If NFD/NRD are system-wide services, what is the motivation for this? (If they always ran under their own user, and the keystore was that of the NFD "user" this would make sense to me. But, this is not the default behavior yet.) perhaps their should be a client-default.conf in the same place as nfd.conf, and just override it from ~/.ndn/client.conf? I think a client-default.conf is possible if we set user's home directory as the default path to keystore. Yes, makes sense. I'm not sure it's necessary if the NRD/NFD identity issue is solved in a different way. - There is an operator identity created by default at the time of installation of NFD/NRD. This seems to be associated with the user who installed the software. For NDN applications that run under other (non-root) users, is the idea that they can use their own client.conf settings, but the operator of the machine will need to authorize their key to sign things like prefix registration commands for the daemon? Ideally yes, but command interest validation is turned off. Yes, but hopefully not for long :) It seems like there is a potential bottleneck here, where on a multi-user host the operator will need to manually sign each user's key before they can register prefixes, which is a basic part of NDN communication. 
Perhaps there should be an optional configuration mechanism available that automatically signs the keys of all authenticated users (who optionally are members of some group) on the host. - Because it uses ndn-cxx, ndnsec manipulates the PIB/TPM for the current user, based on the settings in that user's client.conf. Is that correct? This may need to be clarified in the documentation. Also, ndnsec-list does not seem to take into account the TPM setting, it just lists all of the keys in the PIB for the user. Perhaps it could indicate which TPM they are stored in, to aid debugging? ndn-cxx internally maintains the consistency between PIB and TPM. Right now ndn-cxx assumed that there is only one TPM for one PIB, so we do not list the TPM information for each key. But I think we may need the TPM info when PIB handles multiple TPMs at the same time. Ok. This causes some problems with NFD/NRD and probably other applications in cases where you are experimenting with two different TPMs. See Bug #1889. Thanks! Jeff Yingdi Thanks, Jeff From: Alex Afanasyev > Date: Mon, 18 Aug 2014 15:49:59 -0500 To: Steve DiBenedetto > Cc: Jeff Burke >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf On Aug 18, 2014, at 3:38 PM, Steve DiBenedetto > wrote: On Aug 18, 2014, at 2:25 PM, Burke, Jeff > wrote: Hi folks, Is there any specification for the purpose and scope of client.conf. E.g., how it can be located, what can be specified there, and what code should pay attention to it? I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364 . This is basically the spec. A new addition that was made long time ago was part of http://redmine.named-data.net/issues/1532 and is documented in http://redmine.named-data.net/projects/ndn-cxx/wiki/KeyChainConf --- Alex We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. Please see: http://redmine.named-data.net/issues/1850 Thanks, Jeff _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.afanasyev at ucla.edu Tue Aug 19 10:20:58 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Tue, 19 Aug 2014 12:20:58 -0500 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: References: Message-ID: <5445EE3D-D293-4A55-B6EB-189978A5D4AE@ucla.edu> On Aug 19, 2014, at 11:44 AM, Burke, Jeff wrote: > > Hi Yingdi, > Thanks for the reply ? comments below. 
> Jeff > > From: Yingdi Yu > Date: Tue, 19 Aug 2014 11:25:42 -0500 > To: Jeff Burke > Cc: Alexander Afanasyev , Steve DiBenedetto , "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf > >> >> Hi Jeff, >> >> On Aug 18, 2014, at 4:35 PM, Burke, Jeff wrote: >> >>> So, the idea is that client.conf provides per-user configuration for NDN applications that gives the default keystore and mechanism for connecting to the local daemon. For now, the default keystore specified in this configuration must be used if an application wants to interact with the identities/keys manipulated by the ndnsec tools. >>> Is that the right way to state it? >> >> This client.conf is the configuration file of ndn-cxx, so all apps (including ndnsec) compiled against ndn-cxx will use the keystore info in this file as the default configuration. If an app wants to access the identities/keys created through ndn-cxx, the paths in the conf file should be the right place to go. > > > Alex's Issue #1850 suggests a stronger idea - that client.conf is for all libraries / applications interacting locally with NFD/NRD. Is that correct? It is for "all" libraries (applications). >> >>> - NFD and NRD use their own system-wide configuration file, not client.conf. The unix socket they use is defined in their nfd.conf file. But where is the keystore for the keys used by NFD and NRD, and how is it configured? I couldn't find this in the developers guide. >> >> NFD and NRD are built against ndn-cxx, so they use the keystore info in the client.conf. We will clarify that in the dev guide. > > > I'm still not sure that I understand why NFD/NRD should use a user's ~/.ndn/client.conf for their identity, if they are system-wide services. This means that NFD and NRD will have different identities/signing keys on the same host if they are started by different users? Shouldn't the daemon's identities be stable per-host unless changed by the owner/operator? I think the fact that NFD and NRD are using ~/.ndn/client.conf is just an artifact of your installation. They are not suppose to be running as normal user. One is running as root, the other should be some special user (ndn user if using Ubuntu PPA or macports). In either case, ~/.ndn/client.conf should refer to different locations and there is "system-wide" /etc/ndn/client.conf (or /usr/local/etc/ndn/client.conf, depending on your installation) which kicks in when ~/.ndn/client.conf doesn't exist. One more thing. Both ubuntu and macports are using trick with updating HOME variable before starting nfd, nrd, and all other daemons. This ensures that deamons/apps don't share client.conf and security credentials. >> >>> - It is assumed that there is only one installation of NFD/NRD per host. So, shouldn't the socket configuration and protocol be per-host rather than per user? I understand that for convenience they may be in client.conf, but want to confirm... >> >> We may have per-host config for socket, but for keystore info, it might be better to keep per-user configuration. > > > If NFD/NRD are system-wide services, what is the motivation for this? (If they always ran under their own user, and the keystore was that of the NFD "user" this would make sense to me. But, this is not the default behavior yet.) System-wide services do not (not suposed to) use user's client.conf. >> >>> perhaps their should be a client-default.conf in the same place as nfd.conf, and just override it from ~/.ndn/client.conf? 
>> >> I think a client-default.conf is possible if we set user's home directory as the default path to keystore. > > > Yes, makes sense. I'm not sure it's necessary if the NRD/NFD identity issue is solved in a different way. > >> >>> - There is an operator identity created by default at the time of installation of NFD/NRD. This seems to be associated with the user who installed the software. For NDN applications that run under other (non-root) users, is the idea that they can use their own client.conf settings, but the operator of the machine will need to authorize their key to sign things like prefix registration commands for the daemon? >> >> Ideally yes, but command interest validation is turned off. > > > Yes, but hopefully not for long :) > > It seems like there is a potential bottleneck here, where on a multi-user host the operator will need to manually sign each user's key before they can register prefixes, which is a basic part of NDN communication. Perhaps there should be an optional configuration mechanism available that automatically signs the keys of all authenticated users (who optionally are members of some group) on the host. > >> >>> - Because it uses ndn-cxx, ndnsec manipulates the PIB/TPM for the current user, based on the settings in that user's client.conf. Is that correct? This may need to be clarified in the documentation. Also, ndnsec-list does not seem to take into account the TPM setting, it just lists all of the keys in the PIB for the user. Perhaps it could indicate which TPM they are stored in, to aid debugging? >> >> ndn-cxx internally maintains the consistency between PIB and TPM. Right now ndn-cxx assumed that there is only one TPM for one PIB, so we do not list the TPM information for each key. But I think we may need the TPM info when PIB handles multiple TPMs at the same time. > > > Ok. This causes some problems with NFD/NRD and probably other applications in cases where you are experimenting with two different TPMs. See Bug #1889. It was never intended to use multiple TPMs in the first place. It just a current accident that we have multiple ones and behavior is not defined if there is a mix up. --- Alex > > Thanks! > Jeff > > >> >> Yingdi >> >> >>> >>> >>> Thanks, >>> Jeff >>> >>> >>> From: Alex Afanasyev >>> Date: Mon, 18 Aug 2014 15:49:59 -0500 >>> To: Steve DiBenedetto >>> Cc: Jeff Burke , "nfd-dev at lists.cs.ucla.edu" >>> Subject: Re: [Nfd-dev] Intended scope of client.conf >>> >>>> >>>> On Aug 18, 2014, at 3:38 PM, Steve DiBenedetto wrote: >>>> >>>>> >>>>> On Aug 18, 2014, at 2:25 PM, Burke, Jeff wrote: >>>>> >>>>>> >>>>>> Hi folks, >>>>>> >>>>>> Is there any specification for the purpose and scope of client.conf. E.g., how it can be located, what can be specified there, and what code should pay attention to it? >>>>> >>>>> I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364 . >>>> >>>> This is basically the spec. A new addition that was made long time ago was part of http://redmine.named-data.net/issues/1532 and is documented in http://redmine.named-data.net/projects/ndn-cxx/wiki/KeyChainConf >>>> >>>> --- >>>> Alex >>>> >>>>> >>>>>> >>>>>> >>>>>> We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. 
For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. >>>>>> >>>>>> Please see: >>>>>> http://redmine.named-data.net/issues/1850 >>>>>> >>>>>> Thanks, >>>>>> Jeff >>>>>>
>>>>>> _______________________________________________ >>>>>> Nfd-dev mailing list >>>>>> Nfd-dev at lists.cs.ucla.edu >>>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>>
>>>>> _______________________________________________ >>>>> Nfd-dev mailing list >>>>> Nfd-dev at lists.cs.ucla.edu >>>>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>>>
>>> _______________________________________________ >>> Nfd-dev mailing list >>> Nfd-dev at lists.cs.ucla.edu >>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev >>

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From shijunxiao at email.ARIZONA.EDU Wed Aug 20 04:20:14 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 20 Aug 2014 04:20:14 -0700
Subject: [Nfd-dev] [Operators] monitoring data plane In-Reply-To: References: Message-ID:

Hi Jeff

http://yoursunny.com/p/ndn6/reachability/ is a multi-HUB test. Both programs have the same drawback: the measured delay includes not only the delay between HUBs, but also the delay from the laptop to the connected HUB.

In making this multi-HUB reachability test, I have the following experience:

- Do not send a ping as the first Interest, because its measured delay may contain the time spent establishing the WebSocket connection. Instead, send a dummy Interest (e.g. "ndn:/"), and send the first ping after the dummy Interest is satisfied or timed out.
- Do not send too many Interests at the same time. Delay is significantly increased if the rate is more than 5 Interests/second, due to a CPU bottleneck (on a Windows 7 laptop). Chrome's CPU usage should be kept under 2% for the measured delay to be accurate.
- ndnpingserver must use an RSA signature. SHA256 digest signatures cannot be decoded by ndn-js (Feature #1851).

Yours, Junxiao

On Mon, Aug 18, 2014 at 7:10 PM, Burke, Jeff wrote: > > Also, in case it's helpful, I've updated (hastily) the ndn-ping example > in ndn-js to work with the current ping server and NFD. > https://github.com/named-data/ndn-js/tree/master/examples/ndnping > > A live version is here: > http://named-data.net/apps/live/ndn-ping.html > > This only connects to a single hub, but perhaps it would be useful in > building a multi-hub test, though the code is not all that well written > (sorry). > > Jeff > >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jburke at remap.ucla.edu Wed Aug 20 08:57:33 2014
From: jburke at remap.ucla.edu (Burke, Jeff)
Date: Wed, 20 Aug 2014 15:57:33 +0000
Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: <5445EE3D-D293-4A55-B6EB-189978A5D4AE@ucla.edu> Message-ID:

Hi Alex,

A few comments below.

Jeff

From: Alex Afanasyev > Date: Tue, 19 Aug 2014 12:20:58 -0500 To: Jeff Burke > Cc: Yingdi Yu >, Steve DiBenedetto >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf

On Aug 19, 2014, at 11:44 AM, Burke, Jeff > wrote:

Hi Yingdi,

Thanks for the reply - comments below.
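For concreteness, when I say client.conf below, I mean a file of roughly this shape. This is only a sketch: the transport/pib/tpm key names are my reading of the Redmine issue and the KeyChainConf wiki page cited later in this thread, and the specific values are assumptions on my part, not a spec.

    ; minimal ~/.ndn/client.conf sketch (values are examples only)
    ; transport: how apps reach the local forwarder
    transport=unix:///var/run/nfd.sock
    ; pib/tpm: where public identity info and private keys are kept
    pib=pib-sqlite3
    tpm=tpm-file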
Jeff

From: Yingdi Yu > Date: Tue, 19 Aug 2014 11:25:42 -0500 To: Jeff Burke > Cc: Alexander Afanasyev >, Steve DiBenedetto >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf

Hi Jeff,

On Aug 18, 2014, at 4:35 PM, Burke, Jeff > wrote:

So, the idea is that client.conf provides per-user configuration for NDN applications that gives the default keystore and mechanism for connecting to the local daemon. For now, the default keystore specified in this configuration must be used if an application wants to interact with the identities/keys manipulated by the ndnsec tools. Is that the right way to state it?

This client.conf is the configuration file of ndn-cxx, so all apps (including ndnsec) compiled against ndn-cxx will use the keystore info in this file as the default configuration. If an app wants to access the identities/keys created through ndn-cxx, the paths in the conf file should be the right place to go.

Alex's Issue #1850 suggests a stronger idea - that client.conf is for all libraries / applications interacting locally with NFD/NRD. Is that correct?

It is for "all" libraries (applications).

Ok, I'd suggest there should be a spec for this eventually; for now it is self-explanatory, though.

- NFD and NRD use their own system-wide configuration file, not client.conf. The unix socket they use is defined in their nfd.conf file. But where is the keystore for the keys used by NFD and NRD, and how is it configured? I couldn't find this in the developers guide.

NFD and NRD are built against ndn-cxx, so they use the keystore info in the client.conf. We will clarify that in the dev guide.

I'm still not sure that I understand why NFD/NRD should use a user's ~/.ndn/client.conf for their identity, if they are system-wide services. This means that NFD and NRD will have different identities/signing keys on the same host if they are started by different users? Shouldn't the daemon's identities be stable per-host unless changed by the owner/operator?

I think the fact that NFD and NRD are using ~/.ndn/client.conf is just an artifact of your installation. They are not supposed to be running as a normal user. One is running as root; the other should be some special user (the ndn user if using the Ubuntu PPA or MacPorts). In either case, ~/.ndn/client.conf should refer to different locations, and there is a "system-wide" /etc/ndn/client.conf (or /usr/local/etc/ndn/client.conf, depending on your installation) which kicks in when ~/.ndn/client.conf doesn't exist.

Yes, I understand now. I think this is just a documentation issue: Originally I followed the instructions in "Getting started with NFD" pretty carefully to install from source, as I prefer to be able to experiment with the source and not rely on MacPorts or binary installations. There is nothing that I could find that suggests it is necessary to set up NRD/NFD to run in this way, or what the implications are if you don't do it. If it is not "normal" to run the daemons as your own user, I think this just needs to be explained in the documentation for installing from source. The FAQ section on "How to run NFD as non-root user" doesn't explain the implications, nor cover all of the details of what needs to be done in terms of setting file permissions, creating keys (perhaps), etc. in addition to the config file step. (Also, it isn't clear that NFD itself moves from root to the less-privileged user, but NRD is always run as the less-privileged user.)
Since I prefer to have the user called nfd and not ndn (too general), I went through and manually followed the steps that the MacPorts installation does, modified also to point to what I am already compiling from source, and everything works fine. So perhaps at some point that could be converted to a script independent of the MacPorts installation that could be run for source installs? And/or the steps documented somewhere else... At a minimum, I think the documentation should point out the importance of using one of the package installs if you want to get privileges and keystores correctly set up...

As an aside, in looking at Mac OS X launchd, I did notice the following in the man page, which may have implications for NFD:

A daemon or agent launched by launchd SHOULD NOT do the following as a part of their startup initialization:
o Setup the user ID or group ID.
o Setup the working directory.
o chroot(2)
o setsid(2)
o Close "stray" file descriptors.
o Change stdio(3) to /dev/null.
o Setup resource limits with setrlimit(2).
o Setup priority with setpriority(2).
o Ignore the SIGTERM signal.

One more thing. Both Ubuntu and MacPorts use the trick of updating the HOME variable before starting nfd, nrd, and all other daemons. This ensures that daemons/apps don't share client.conf and security credentials.

- It is assumed that there is only one installation of NFD/NRD per host. So, shouldn't the socket configuration and protocol be per-host rather than per user? I understand that for convenience they may be in client.conf, but want to confirm...

We may have a per-host config for the socket, but for keystore info, it might be better to keep per-user configuration.

If NFD/NRD are system-wide services, what is the motivation for this? (If they always ran under their own user, and the keystore was that of the NFD "user", this would make sense to me. But, this is not the default behavior yet.)

System-wide services do not (and are not supposed to) use the user's client.conf.

perhaps there should be a client-default.conf in the same place as nfd.conf, and just override it from ~/.ndn/client.conf?

I think a client-default.conf is possible if we set the user's home directory as the default path to the keystore.

Yes, makes sense. I'm not sure it's necessary if the NRD/NFD identity issue is solved in a different way.

- There is an operator identity created by default at the time of installation of NFD/NRD. This seems to be associated with the user who installed the software. For NDN applications that run under other (non-root) users, is the idea that they can use their own client.conf settings, but the operator of the machine will need to authorize their key to sign things like prefix registration commands for the daemon?

Ideally yes, but command interest validation is turned off.

Yes, but hopefully not for long :)

It seems like there is a potential bottleneck here, where on a multi-user host the operator will need to manually sign each user's key before they can register prefixes, which is a basic part of NDN communication. Perhaps there should be an optional configuration mechanism available that automatically signs the keys of all authenticated users (who optionally are members of some group) on the host.

- Because it uses ndn-cxx, ndnsec manipulates the PIB/TPM for the current user, based on the settings in that user's client.conf. Is that correct? This may need to be clarified in the documentation. Also, ndnsec-list does not seem to take into account the TPM setting, it just lists all of the keys in the PIB for the user.
Perhaps it could indicate which TPM they are stored in, to aid debugging?

ndn-cxx internally maintains the consistency between PIB and TPM. Right now ndn-cxx assumes that there is only one TPM for one PIB, so we do not list the TPM information for each key. But I think we may need the TPM info when the PIB handles multiple TPMs at the same time.

Ok. This causes some problems with NFD/NRD and probably other applications in cases where you are experimenting with two different TPMs. See Bug #1889.

It was never intended to use multiple TPMs in the first place. It's just a current accident that we have multiple ones, and the behavior is not defined if there is a mix-up.

Yes, I understand. I just think NFD/NRD should not fail to start in these circumstances, as long as they can load the keys they need.

--- Alex

Thanks! Jeff

Yingdi

Thanks, Jeff

From: Alex Afanasyev > Date: Mon, 18 Aug 2014 15:49:59 -0500 To: Steve DiBenedetto > Cc: Jeff Burke >, "nfd-dev at lists.cs.ucla.edu" > Subject: Re: [Nfd-dev] Intended scope of client.conf

On Aug 18, 2014, at 3:38 PM, Steve DiBenedetto > wrote:

On Aug 18, 2014, at 2:25 PM, Burke, Jeff > wrote:

Hi folks,

Is there any specification for the purpose and scope of client.conf? E.g., how it can be located, what can be specified there, and what code should pay attention to it?

I'm not aware of a full specification for the current client.conf. Here's the original client.conf redmine issue that covers file format, location, and some parameters: http://redmine.named-data.net/issues/1364 .

This is basically the spec. A new addition that was made a long time ago was part of http://redmine.named-data.net/issues/1532 and is documented in http://redmine.named-data.net/projects/ndn-cxx/wiki/KeyChainConf

--- Alex

We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too.

Please see: http://redmine.named-data.net/issues/1850

Thanks, Jeff

_______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

_______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

_______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From davide.pesavento at lip6.fr Thu Aug 21 10:25:18 2014
From: davide.pesavento at lip6.fr (Davide Pesavento)
Date: Thu, 21 Aug 2014 19:25:18 +0200
Subject: [Nfd-dev] Vehicular face design: preliminary questions Message-ID:

Hi guys,

We recently started designing the vehicular face for NFD (basically an ad-hoc wifi face that uses raw 802.11 frames, see #1216 and related tasks), and we have some preliminary questions for you, to help us plan the design and implementation phases.

1/ Is there a tentative release schedule for v0.3? Depending on it, we can decide whether submitting for v0.3 is feasible or if we should target v0.4 instead.

2/ A big chunk of support code that we will have to write is fairly separate and independent from the actual V2V face and Link Adaptation Layer, for example the location service, the GPS parser, some utility classes to read digital road maps, and so on.
These components are prerequisites for the vehicular face but can be developed independently. Would it be acceptable to submit them early (say for v0.3) and have them merged, even if we realize that we're unable to finish the face in time for that version?

3/ Our old implementation used raw AF_PACKET sockets directly and was therefore Linux-only. We still have no interest in supporting platforms other than Linux (and possibly Android at a later time), so we'd like to know if we can keep using raw sockets. If there's opposition, we might investigate using libpcap, but this doesn't mean we will be testing the code on other platforms such as OS X, so it may or may not work. Another potential problem is that the V2V face may require boost version 1.54, when asio::generic::raw_protocol was introduced.

4/ Are we allowed to use threading inside the vehicular face? (in our old design, each V2V face creates a thread where all the "layer-2.5" management operations are performed)

5/ [not really a question] We identified at least two functionalities that can possibly be abstracted and factored out in a common superclass: (i) NDNLP fragmentation, shared with EthernetFace; (ii) duplicate data suppression, shared with EthernetFace and MulticastUdpFace (this one might prove difficult because the suppression mechanisms that we've developed for V2V are substantially more complex).

We are trying to come up with an initial design proposal ASAP; hopefully we can discuss it in person at NDNcomm.

Thanks, Davide

From shijunxiao at email.arizona.edu Thu Aug 21 15:43:52 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 21 Aug 2014 15:43:52 -0700
Subject: [Nfd-dev] Vehicular face design: preliminary questions In-Reply-To: References: Message-ID:

Hi Davide

1/ targeting v0.3 or v0.4?
The policy is one release every three months. According to this policy, the release date of v0.3 is projected to be Nov 25, 2014.

2/ submitting supporting components early
Nothing prevents you from submitting supporting components early. One prior example is RttEstimator. It's intended to be used by forwarding strategies in the future, but no current strategy uses it. When I submitted that component, I was almost sure that I wouldn't be able to complete a strategy that needs RttEstimator. I suggest you provide a design document that explains which components there are, their functionality, and their relationships. Without this document, the necessity of the components would be challenged, and you'll have a hard time getting them approved.

3/ platform limitation
Platform policy requires all projects to *build* on both Ubuntu and OSX. As I understand it, if the addition of the V2V face and supporting components doesn't cause a build failure, you're fine. One prior example is that EthernetFace does not support OSX with Boost 1.56. If you intend to support Ubuntu and Boost 1.54 only, I suggest adding a configuration option, so that the V2V face and supporting components are compiled only if "./waf configure" is executed with the "--with-v2v" option.

4/ threading
The decision not to use threading comes from Van's opinion at the 20140115 meeting: context switching is expensive - a context switch costs around 100 instructions. Threading can be used if you can prove it's more efficient than a design without threading. I personally agree that it's worth seconds of programmer time to save hours of CPU time.
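To be concrete: if a thread is used, the shape I would find easiest to accept is a face-private io_service running in a worker thread, where the layer-2.5 housekeeping happens, with anything that touches forwarder state posted back to the forwarder's io_service. A rough sketch of that pattern (the class and its names are made up for discussion, not NFD API):

    // Hypothetical sketch: keep layer-2.5 work on a private thread and
    // hand results to the forwarder thread via its io_service.
    #include <boost/asio.hpp>
    #include <boost/function.hpp>
    #include <boost/thread.hpp>

    class V2VWorker
    {
    public:
      explicit
      V2VWorker(boost::asio::io_service& mainIo)
        : m_mainIo(mainIo)
        , m_work(m_workerIo)
        , m_thread(&V2VWorker::runService, &m_workerIo)
      {
      }

      ~V2VWorker()
      {
        m_workerIo.stop(); // abandon pending layer-2.5 work
        m_thread.join();
      }

      // schedule frame filtering, ACK bookkeeping, timer adjustment,
      // etc. on the worker thread
      void
      dispatchToWorker(const boost::function<void()>& task)
      {
        m_workerIo.post(task);
      }

      // hand a surviving packet to the forwarder; forwarder state is
      // only ever touched from the main thread
      void
      dispatchToForwarder(const boost::function<void()>& handler)
      {
        m_mainIo.post(handler);
      }

    private:
      static void
      runService(boost::asio::io_service* io)
      {
        io->run();
      }

    private:
      boost::asio::io_service& m_mainIo;    // forwarder's main loop
      boost::asio::io_service m_workerIo;   // face-private loop
      boost::asio::io_service::work m_work; // keeps run() alive
      boost::thread m_thread;
    };

With this split, the worker can examine and drop most received broadcast frames without ever waking the forwarder loop.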
If you can illustrate that a design without threading is much more difficult to develop than a design with threading, and an implementation with threading won't become a bottleneck on V2V's typical operation environment (vehicle onboard computer), I'll agree with using threading. 5i/ NDNLP common superclass NDNLP component is already implemented individually. If you think it's effective to make a superclass, please propose its API. 5ii/ duplicate suppression superclass If you think it's effective to make a superclass, please propose its API. Yours, Junxiao On Thu, Aug 21, 2014 at 10:25 AM, Davide Pesavento wrote: > Hi guys, > > We recently started designing the vehicular face for NFD (basically an > ad-hoc wifi face that uses raw 802.11 frames, see #1216 and related > tasks), and we have some preliminary questions for you, to help us > plan the design and implementation phases. > > 1/ Is there a tentative release schedule for v0.3? Depending on it, we > can decide whether submitting for v0.3 is feasible or if we should > target v0.4 instead. > > 2/ A big chunk of support code that we will have to write is fairly > separate and independent from the actual V2V face and Link Adaptation > Layer, for example the location service, the GPS parser, some utility > classes to read digital road maps, and so on. These components are > prerequisites for the vehicular face but can be developed > independently. Would it be acceptable to submit them early (say for > v0.3) and have them merged, even if we realize that we're unable to > finish the face in time for that version? > > 3/ Our old implementation used raw AF_PACKET sockets directly and was > therefore Linux-only. We still have no interest in supporting > platforms other than Linux (and possibly Android at a later time), so > we'd like to know if we can keep using raw sockets. > If there's opposition, we might investigate using libpcap, but this > doesn't mean we will be testing the code on other platforms such as OS > X, so it may or may not work. > Another potential problem is that the V2V face may require boost > version 1.54, when asio::generic::raw_protocol was introduced. > > 4/ Are we allowed to use threading inside the vehicular face? (in our > old design, each V2V face creates a thread where all the "layer-2.5" > management operations are performed) > > 5/ [not really a question] We identified at least two functionalities > that can possibly be abstracted and factored out in a common > superclass: > (i) NDNLP fragmentation, shared with EthernetFace; > (ii) Duplicate data suppression, shared with EthernetFace and > MulticastUdpFace (this one might prove difficult because the > suppression mechanisms that we've developed for V2V are substantially > more complex). > > We are trying to come up with an initial design proposal ASAP, > hopefully we can discuss it in person at NDNcomm. > > Thanks, > Davide > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lanwang at memphis.edu Fri Aug 22 07:48:18 2014 From: lanwang at memphis.edu (Lan Wang (lanwang)) Date: Fri, 22 Aug 2014 14:48:18 +0000 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: References: Message-ID: <01B4D59A-2381-4FB2-87B8-540FDC75D9FA@memphis.edu> Any conclusion on this issue? Are there any changes necessary for nrd? Lan On Aug 18, 2014, at 3:25 PM, "Burke, Jeff" > wrote: Hi folks, Is there any specification for the purpose and scope of client.conf. 
E.g., how it can be located, what can be specified there, and what code should pay attention to it? We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. Please see: http://redmine.named-data.net/issues/1850 Thanks, Jeff _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Fri Aug 22 10:18:15 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Fri, 22 Aug 2014 17:18:15 +0000 Subject: [Nfd-dev] Intended scope of client.conf In-Reply-To: <01B4D59A-2381-4FB2-87B8-540FDC75D9FA@memphis.edu> Message-ID: There are no changes required for NFD. We are going to implement client support in NDN-CCL based on the examples available for now, but think that a spec document is important eventually if all applications are supposed to use this file for their default configuration in talking to the forwarder. A separate but related issue is that instructions need to be provided for how to get nfd/nrd running properly under a specific user (say, ndn) as Alex and others have indicated is the correct way to run it. This is done under the package installations but not detailed for those building from source. Jeff From: "Lan Wang (lanwang)" > Date: Fri, 22 Aug 2014 14:48:18 +0000 To: Jeff Burke > Cc: ">" > Subject: Re: [Nfd-dev] Intended scope of client.conf Any conclusion on this issue? Are there any changes necessary for nrd? Lan On Aug 18, 2014, at 3:25 PM, "Burke, Jeff" > wrote: Hi folks, Is there any specification for the purpose and scope of client.conf. E.g., how it can be located, what can be specified there, and what code should pay attention to it? We need to consider how to handle this specification in the NDN-CCL libraries, and would prefer to work from a design specification rather than existing code in ndn-cxx. For code like ndn-js, it may not always apply, so we need to understand defaults and assumptions, too. Please see: http://redmine.named-data.net/issues/1850 Thanks, Jeff _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jburke at remap.ucla.edu Fri Aug 22 10:20:57 2014 From: jburke at remap.ucla.edu (Burke, Jeff) Date: Fri, 22 Aug 2014 17:20:57 +0000 Subject: [Nfd-dev] NFD documentation comments In-Reply-To: Message-ID: Hi, Submitted as Task #1905. http://redmine.named-data.net/issues/1905 Jeff From: Jeff Burke > Date: Mon, 18 Aug 2014 15:51:26 +0000 To: "nfd-dev at lists.cs.ucla.edu" > Subject: [Nfd-dev] NFD documentation comments Hi NFD team, Yesterday I tried to methodically follow the directions for installing NFD on a few machines, and wanted to share some comments on the documentation. I have installed it in the past, but tried to forget what I remembered. :) Please take/ignore as you will. I hope the comments are helpful. cheers, Jeff --- Once I figured out where to look, the docs are pretty easy to figure out and things build/install smoothly. Congrats! 
The main confusing thing is that there are several candidate starting points (all of which I ended up needing), and replication of documentation on the Wiki, github source distribution, and website.

- I'd suggest not duplicating pages on the Wiki and the Web. It leads to confusion and inconsistencies. For example, the "Getting Started with NFD" page has this circular reference: On the web: "A more recent version of this document may be available on NFD Wiki." On the wiki: "A more recent version of this document may be available on NFD Homepage."

- The pointer at the top of README.md should more directly address the person who wants to get the package installed, i.e., "For complete documentation, including step-by-step installation instructions, go to NFD's homepage." If this change is made, then I'd suggest everything after the overview in the README can be in the installation instructions rather than the README. This would reduce redundancy.

- Ideally, I think that all documents/sites/etc. should emphasize a single starting point for working practically with NFD, either index.rst or the README. They should also point to the named-data web (rather than in Github, say). It seems like one should be able to open and navigate the docs on github, but this doesn't always work. I'd suggest that people browsing on github be more clearly directed to the named-data.net starting index page or their local docs for NFD installation.

- Not sure that README.md and docs/README.rst should be different.

- There is information about binaries spread across many places: the README.md, the FAQ, index.rst, and "Getting Started", the latter of which is described as source code (not binary) instructions by the README.md. Perhaps this could be consolidated to a single location and referenced as needed.

- Also, there is discussion of platform build experience in "Additional Information" that would probably be more useful in the "Getting started" page near the source instructions.

- There is an INSTALL.rst file, which is really source build instructions. It is not nearly as useful as "Getting Started with NFD". I'd suggest the two files should be combined and one eliminated... as "INSTALL" is the obvious place to look, but incomplete. Or, if you want to keep them separate, the INSTALL file needs to end with a pointer back to Getting Started to continue configuration, and perhaps start with proper clone instructions, etc. The INSTALL document should also be titled something more descriptive like "Building NFD from source".

- The Wiki needs to be more prominently linked in the documentation, especially the README, as the place to go to get packet format and protocol information (if this is indeed the right starting point).

- The documentation often needs to be read ahead to be understood. This should be corrected where possible. For example, the "Getting Started with NFD" document sends you to install ndn-cxx and NFD according to instructions on other pages, but further down gives you the correct clone instructions. The clone instructions should be given first, then the reader sent to the pages to follow the install instructions. Another example is the MacOS X build instructions (including the PKG_CONFIG fix), which come after what appear to be build instructions for all platforms. This should be re-arranged or at least have different titles to be more clear.

- Each document in index.rst should have a short explanation of what people will find there / why they should go there.
For example, "Getting started" is how to install, whereas README provides project background, etc. - Consider having a pointer to RELEASE NOTES in the README, and certainly in index.rst. - The FAQ document is sort of a catch-all. To me, all of the answers really should be in the appropriate sections of documentation, rather than in a FAQ list without an index, but I understand its current purpose while the documentation is still in its infancy. In particular, though, "How to configure NFD security" should be the stub of its a separate document on NFD security configuration. Also, I wonder if, though, the FAQ should be a Wiki page, so it is more easily community-editable? - The default security actions that are performed by the binary installations should be described (and motivated) so they can be duplicated by those doing source installs. And/or, scripts should be provided to do the same things in the source installs. Further, though NFD ships with no security configuration, it does by default create identities the first time it is run. This should be better described. - Unit tests are not only needed by NFD developers, they are of interest for anyone wanting to check for problems building this new code on their hosts. (I had to run them to help narrow down a problem with EthernetFace on my machine.) The unit test installation instructions should be included in the main installation instructions. Since there is nothing else of consequence in README-dev, I think that file should be removed for now, and a pointer provided in the README to the NFD wiki, which I'm sure contains more detailed and up-to-date information for developers. - The test producer and consumer distributed with the ndn-cxx library can't be run without NFD. The docs should mention this even though the apps throw an error. _______________________________________________ Nfd-dev mailing list Nfd-dev at lists.cs.ucla.edu http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From davide.pesavento at lip6.fr Fri Aug 22 13:00:43 2014 From: davide.pesavento at lip6.fr (Davide Pesavento) Date: Fri, 22 Aug 2014 22:00:43 +0200 Subject: [Nfd-dev] Vehicular face design: preliminary questions In-Reply-To: References: Message-ID: Hi Junxiao, Thanks for replying. On Fri, Aug 22, 2014 at 12:43 AM, Junxiao Shi wrote: > > 2/ submitting supporting components early > [...] > I suggest you to provide a design document that explains how many components > are there, their functionality and relationship. > Without this document, the necessity of components would be challenged, and > you'll have a hard time getting them approved. > Sure, we already intended to provide a design document before starting the implementation. > 3/ platform limitation > Platform policy requires all projects to build on both Ubuntu and OSX. > As I understand, if the addition of V2V face and supporting components > doesn't cause a build failure, you're fine. > One prior example is that EthernetFace does not support OSX with Boost 1.56. > > If you intend to support Ubuntu and Boost 1.54 only, I suggest adding a > configuration option, so that V2V face and supporting components are > compiled only if "./waf configure" is executed with "--with-v2v" option. > Ok, that is not a problem then. > 4/ threading > The decision of not to use threading comes from Van's opinion at 20140115 > meeting: context switching is expensive - a context switch costs around 100 > instructions. 
> Threading can be used if you can prove it's more efficient than a design
> without threading.
>
> I personally agree that it's worth seconds of programmer time to save hours
> of CPU time.
> If you can illustrate that a design without threading is much more difficult
> to develop than a design with threading, and an implementation with
> threading won't become a bottleneck on V2V's typical operation environment
> (vehicle onboard computer), I'll agree with using threading.
>

The main reason why we chose multi-threading was that our Link Adaptation Layer (a.k.a. layer-2.5, inside the V2V face) performs several tasks that are completely independent from the main forwarder thread (such as distance calculation and handling of digital road maps) and needs to process all received (broadcast) frames in order to properly adjust retransmission timers, to handle implicit ACKs, directional ACKs, geo-tagging, and other things specific to the vehicular environment (only a fraction of all these packets are then passed to the forwarder). We didn't want all these tasks to interfere with the main forwarder loop and slow down packet processing on the other (non-vehicular) faces.

Anyway, we don't know yet if it still makes sense for us to use a separate thread, given the new daemon design. We are still investigating the best approach and will present our proposal soon.

So what we actually wanted to know with question 4/ was: is threading completely forbidden or will it be accepted if properly justified? From your answer it seems that the latter is true.

Thanks, Davide

From davide.pesavento at lip6.fr Fri Aug 22 13:27:08 2014
From: davide.pesavento at lip6.fr (Davide Pesavento)
Date: Fri, 22 Aug 2014 22:27:08 +0200
Subject: [Nfd-dev] Using (some features of) C++11 Message-ID:

Hi guys,

Can we start using some pieces of C++11 [1] in v0.3?

Our oldest supported platform, Ubuntu 12.04, comes with gcc-4.6 that supports quite a few C++11 features already [2]. If we want to keep compatibility with even older Ubuntu LTS releases, 10.04 (EOL April 2015) comes with gcc-4.4, which still has a few interesting pieces of C++11 implemented [3], such as rvalue references, "auto" variables, initializer lists, and so on.

Unless there's opposition to the idea of using C++11 at all, we should decide the minimum supported version of gcc/clang and come up with a set of C++11 features that we're allowed to use starting with v0.3 of ndn-cxx and NFD.

Thanks, Davide

[1] http://en.wikipedia.org/wiki/C%2B%2B11
[2] https://gcc.gnu.org/gcc-4.6/cxx0x_status.html
[3] https://gcc.gnu.org/gcc-4.4/cxx0x_status.html

From shijunxiao at email.arizona.edu Fri Aug 22 23:09:47 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Fri, 22 Aug 2014 23:09:47 -0700
Subject: [Nfd-dev] Using (some features of) C++11 In-Reply-To: References: Message-ID:

Dear folks

My opinion is: all C++11 features supported by all required platforms should be permitted. There is no reason to be conservative. It's also impractical to approve only certain features without enforcement from Jenkins. One example is that we have a limited set of approved Boost libraries http://redmine.named-data.net/projects/nfd/wiki/Boost but this limitation is never enforced.

Yours, Junxiao

On Aug 22, 2014 1:28 PM, "Davide Pesavento" wrote:
> Hi guys,
>
> Can we start using some pieces of C++11 [1] in v0.3?
>
> Our oldest supported platform, Ubuntu 12.04, comes with gcc-4.6 that
> supports quite a few C++11 features already [2].
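To make this concrete: everything in the toy fragment below already builds with gcc-4.6 in -std=c++0x mode (gcc-4.7 is the first release that also accepts the -std=c++11 spelling). It is only an illustration I put together for this thread, not code from any of our repositories:

    // All features below are available in gcc-4.6 with -std=c++0x.
    #include <string>
    #include <utility>
    #include <vector>

    int main()
    {
      std::vector<int> sizes{1, 2, 3};        // initializer lists (gcc >= 4.4)

      std::string a = "hello";
      std::string b = std::move(a);           // rvalue references (gcc >= 4.4)

      auto isEmpty =                          // auto (gcc >= 4.4)
        [](const std::string& s) { return s.empty(); }; // lambdas (gcc >= 4.5)

      int sum = 0;
      for (int x : sizes) {                   // range-based for (gcc >= 4.6)
        sum += x;
      }

      const char* msg = nullptr;              // nullptr (gcc >= 4.6)
      return (!isEmpty(b) && msg == nullptr) ? sum - 6 : 1; // exits 0
    }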
> If we want to keep compatibility with even older Ubuntu LTS releases,
> 10.04 (EOL April 2015) comes with gcc-4.4, which still has a few
> interesting pieces of C++11 implemented [3], such as rvalue references,
> "auto" variables, initializer lists, and so on.
>
> Unless there's opposition to the idea of using C++11 at all, we should
> decide the minimum supported version of gcc/clang and come up with a
> set of C++11 features that we're allowed to use starting with v0.3 of
> ndn-cxx and NFD.
>
> Thanks,
> Davide
>
> [1] http://en.wikipedia.org/wiki/C%2B%2B11
> [2] https://gcc.gnu.org/gcc-4.6/cxx0x_status.html
> [3] https://gcc.gnu.org/gcc-4.4/cxx0x_status.html
> _______________________________________________
> Nfd-dev mailing list
> Nfd-dev at lists.cs.ucla.edu
> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From alexander.afanasyev at ucla.edu Sat Aug 23 10:09:10 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Sat, 23 Aug 2014 10:09:10 -0700
Subject: [Nfd-dev] new draft for public announcement Message-ID: <5D3ECECC-EFF0-4283-A7C2-F3DDB6BBD547@ucla.edu>

Hi everybody,

Please comment on the following public announcement email that we plan to send out on Monday (our official public release date).

Some links on the linked page are currently placeholders, but should not be completely broken (let me know if they are).

--- Alex

========================================================================================

Dear all,

***
More detailed information about the NFD release is available on the NDN website:
http://named-data.net/releases/NFD-0.2.0
***

We are pleased to announce the initial public release (version 0.2.0) of NDN Forwarding Daemon (NFD). NDN Forwarding Daemon (NFD) is a network forwarder that implements and evolves together with the Named Data Networking (NDN) protocol. More details about NFD, release notes, howtos, FAQ, and other useful resources are available online on NFD's official homepage (http://named-data.net/doc/NFD/0.2.0/). In addition to that, NFD's developer guide (http://named-data.net/wp-content/uploads/2014/07/NFD-developer-guide.pdf) provides an extensive and detailed description of the implementation internals and is aimed at helping extend the current functionality of the forwarder.

The main design goal of NFD is to support diverse experimentation with the NDN architecture. The design emphasizes **modularity** and **extensibility** to allow easy experiments with new protocol features, algorithms, and applications. We have not fully optimized the code for performance. The intention is that performance optimizations are one type of experiment that developers can conduct by trying out different data structures and different algorithms; over time, better implementations may emerge within the same design framework.

NFD will keep evolving in three aspects: improvement of the modularity framework, keeping up with the NDN protocol spec, and addition of new features. We hope to keep the modular framework stable and lean, allowing researchers to implement and experiment with various features of the NDN architecture, some of which may eventually work into the protocol specification.
NFD release is part of the new NDN Platform release version 0.3 (http://named-data.net/releases/platform-0.3), which include the following components: - NDN Forwarding Daemon (NFD) version 0.2.0 http://named-data.net/doc/NFD/0.2.0/ - ndn-cxx library version 0.2.0 http://named-data.net/doc/ndn-cxx/0.2.0/ + NDN C++ library with eXperimental eXtensions + ndnsec security tools to manage security identities and certificates - NDN Common Client libraries suite (NDN-CCL) version 0.3 http://named-data.net/releases/CCL-0.3 + NDN-CPP: C++ library + PyNDN2: Python library + NDN-JS: JavaScript library + Java library coming soon - Named Data Link State Routing Protocol (NLSR) version 0.1.0 http://named-data.net/doc/NLSR/0.1.0/ - Next generation of NDN repository (repo-ng): http://github.com/named-data/repo-ng - Ping Application For NDN (ndn-tlv-ping) version 0.2.0 http://github.com/named-data/ndn-tlv-ping - Traffic Generator For NDN (ndn-traffic-generator) version 0.2.0 http://github.com/named-data/ndn-traffic-generator --- NFD Team From christos at cs.colostate.edu Sun Aug 24 04:00:26 2014 From: christos at cs.colostate.edu (Christos Papadopoulos) Date: Sun, 24 Aug 2014 05:00:26 -0600 Subject: [Nfd-dev] new draft for public announcement In-Reply-To: <5D3ECECC-EFF0-4283-A7C2-F3DDB6BBD547@ucla.edu> References: <5D3ECECC-EFF0-4283-A7C2-F3DDB6BBD547@ucla.edu> Message-ID: <53F9C5CA.5020606@cs.colostate.edu> I shortened it a bit. Christos. ------------------------------------------------------------------- Dear all, We are pleased to announce the initial public release (version 0.2.0) of the NDN Forwarding Daemon (NFD). NFD is a network forwarder that implements the Named Data Networking (NDN) protocol. More details about NFD, release notes, HOWTOs, a FAQ and other useful resources are available at NFD's official webpage (http://named-data.net/doc/NFD/0.2.0/). Also available is the NFD developer's guide (http://named-data.net/wp-content/uploads/2014/07/NFD-developer-guide.pdf), which provides a detailed description of the implementation internals. An important goal of NFD is to support experimentation with the NDN architecture. Thus, the current release emphasizes **modularity** and **extensibility** over performance to allow easy experimentation with new protocol features, algorithms, data structures and applications. We invite researchers to experiment with the existing code and submit enhancements both in terms of performance and new architecture features. 
This release is part of the new NDN Platform version 0.3 (http://named-data.net/releases/platform-0.3), which includes the following components: - The NDN Forwarding Daemon (NFD), version 0.2.0 http://named-data.net/doc/NFD/0.2.0/ - The ndn-cxx library, version 0.2.0 http://named-data.net/doc/ndn-cxx/0.2.0/ + The NDN C++ library with eXperimental eXtensions (CXX) + The ndnsec security tools to manage security identities and certificates - The NDN Common Client libraries suite (NDN-CCL), version 0.3 http://named-data.net/releases/CCL-0.3 + The NDN-CPP C++ library + The PyNDN2 Python library + The NDN-JS JavaScript library + (A Java library is coming soon) - The Named Data Link State Routing Protocol (NLSR) version 0.1.0 http://named-data.net/doc/NLSR/0.1.0/ - The next generation of NDN repository (repo-ng) http://github.com/named-data/repo-ng - A Ping Application For NDN (ndn-tlv-ping) version 0.2.0 http://github.com/named-data/ndn-tlv-ping - A Traffic Generator For NDN (ndn-traffic-generator) version 0.2.0 http://github.com/named-data/ndn-traffic-generator *** More detailed information aboutthe NFD release is available on the NDN website http://named-data.net/releases/NFD-0.2.0 *** The NFD Team. On 08/23/2014 11:09 AM, Alex Afanasyev wrote: > Hi everybody, > > Please comment on the following public announcement email that we plan to send out on Monday (our official public release date). > > Some links on line linked page are currently placeholders, but should not be completely broken (let me know if they are). > > --- > Alex > > ======================================================================================== > > Dear all, > > *** > More detailed information about NFD release is available on NDN website: > http://named-data.net/releases/NFD-0.2.0 > *** > > We are pleased to announce the initial public release (version 0.2.0) of NDN Forwarding > Daemon (NFD). NDN Forwarding Daemon (NFD) is a network forwarder that implements and > evolves together with the Named Data Networking (NDN) protocol. More details about NFD, > release notes, howtos, FAQ, and other useful resources about NFD are available online on > official NFD's homepage (http://named-data.net/doc/NFD/0.2.0/). In addition to that, > NFD's developer guide > (http://named-data.net/wp-content/uploads/2014/07/NFD-developer-guide.pdf) provides > extensive and detailed description of implementation internals and is aimed to help extend > current functionality of the forwarder. > > The main design goal of NFD is to support diverse experimentation with NDN architecture. > The design emphasizes **modularity** and **extensibility** to allow easy experiments with > new protocol features, algorithms, and applications. We have not fully optimized the code > for performance. The intention is that performance optimizations are one type of > experiments that developers can conduct by trying out different data structures and > different algorithms; over time, better implementations may emerge within the same design > framework. > > NFD will keep evolving in three aspects: improvement of the modularity framework, keeping > up with the NDN protocol spec, and addition of new features. We hope to keep the modular > framework stable and lean, allowing researchers to implement and experiment with various > features of NDN architecture, some of which may eventually work into the protocol > specification. 
> > NFD release is part of the new NDN Platform release version 0.3 > (http://named-data.net/releases/platform-0.3), which include the following components: > > - NDN Forwarding Daemon (NFD) version 0.2.0 > http://named-data.net/doc/NFD/0.2.0/ > > - ndn-cxx library version 0.2.0 > http://named-data.net/doc/ndn-cxx/0.2.0/ > > + NDN C++ library with eXperimental eXtensions > + ndnsec security tools to manage security identities and certificates > > - NDN Common Client libraries suite (NDN-CCL) version 0.3 > http://named-data.net/releases/CCL-0.3 > > + NDN-CPP: C++ library > + PyNDN2: Python library > + NDN-JS: JavaScript library > + Java library coming soon > > - Named Data Link State Routing Protocol (NLSR) version 0.1.0 > http://named-data.net/doc/NLSR/0.1.0/ > > - Next generation of NDN repository (repo-ng): http://github.com/named-data/repo-ng > > - Ping Application For NDN (ndn-tlv-ping) version 0.2.0 > http://github.com/named-data/ndn-tlv-ping > > - Traffic Generator For NDN (ndn-traffic-generator) version 0.2.0 > http://github.com/named-data/ndn-traffic-generator > > --- > NFD Team > > > _______________________________________________ > Nfd-dev mailing list > Nfd-dev at lists.cs.ucla.edu > http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev > From davide.pesavento at lip6.fr Sun Aug 24 08:20:51 2014 From: davide.pesavento at lip6.fr (Davide Pesavento) Date: Sun, 24 Aug 2014 17:20:51 +0200 Subject: [Nfd-dev] new draft for public announcement In-Reply-To: <53F9C5CA.5020606@cs.colostate.edu> References: <5D3ECECC-EFF0-4283-A7C2-F3DDB6BBD547@ucla.edu> <53F9C5CA.5020606@cs.colostate.edu> Message-ID: On Sun, Aug 24, 2014 at 1:00 PM, Christos Papadopoulos wrote: > I shortened it a bit. > > Christos. > ------------------------------------------------------------------- > > Dear all, > > We are pleased to announce the initial public release (version 0.2.0) of the Someone will be wondering why the "initial public release" is version 0.2 and not 0.1. Is it worth explaining why in a footnote? > NDN Forwarding Daemon (NFD). NFD is a network forwarder that implements the > Named Data Networking (NDN) protocol. More details about NFD, release > notes, HOWTOs, a FAQ and other useful resources are available at NFD's > official webpage (http://named-data.net/doc/NFD/0.2.0/). > > Also available is the NFD developer's guide > (http://named-data.net/wp-content/uploads/2014/07/NFD-developer-guide.pdf), > which provides a detailed description of the implementation internals. > > An important goal of NFD is to support experimentation with the NDN > architecture. Thus, the current release emphasizes **modularity** and > **extensibility** over performance to allow easy experimentation with new > protocol features, algorithms, data structures and applications. We invite > researchers to experiment with the existing code and submit enhancements > both in terms of performance and new architecture features. 
> > This release is part of the new NDN Platform version 0.3 > (http://named-data.net/releases/platform-0.3), which includes the following > components: > > - The NDN Forwarding Daemon (NFD), version 0.2.0 > http://named-data.net/doc/NFD/0.2.0/ > > - The ndn-cxx library, version 0.2.0 > http://named-data.net/doc/ndn-cxx/0.2.0/ > > + The NDN C++ library with eXperimental eXtensions (CXX) > + The ndnsec security tools to manage security identities and > certificates > > - The NDN Common Client libraries suite (NDN-CCL), version 0.3 > http://named-data.net/releases/CCL-0.3 > > + The NDN-CPP C++ library > + The PyNDN2 Python library > + The NDN-JS JavaScript library > + (A Java library is coming soon) > > - The Named Data Link State Routing Protocol (NLSR) version 0.1.0 > http://named-data.net/doc/NLSR/0.1.0/ > > - The next generation of NDN repository (repo-ng) > http://github.com/named-data/repo-ng No version number for repo-ng? > > - A Ping Application For NDN (ndn-tlv-ping) version 0.2.0 > http://github.com/named-data/ndn-tlv-ping > > - A Traffic Generator For NDN (ndn-traffic-generator) version 0.2.0 > http://github.com/named-data/ndn-traffic-generator > > *** > More detailed information aboutthe NFD release is available on the NDN Missing space in "aboutthe". > website > http://named-data.net/releases/NFD-0.2.0 > *** > > The NFD Team. > The rest looks very good to me. Thanks, Davide From alexander.afanasyev at ucla.edu Sun Aug 24 13:22:06 2014 From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sun, 24 Aug 2014 13:22:06 -0700 Subject: [Nfd-dev] new draft for public announcement In-Reply-To: References: <5D3ECECC-EFF0-4283-A7C2-F3DDB6BBD547@ucla.edu> <53F9C5CA.5020606@cs.colostate.edu> Message-ID: Thanks Christos and Davide! I still want to include link to the release html page a first thing in the email. Otherwise people would have to read through the email to get it (and since they may get tired and never get to more detailed page :-)) On Aug 24, 2014, at 8:20 AM, Davide Pesavento wrote: > On Sun, Aug 24, 2014 at 1:00 PM, Christos Papadopoulos > wrote: >> I shortened it a bit. >> >> Christos. >> ------------------------------------------------------------------- >> >> Dear all, >> >> We are pleased to announce the initial public release (version 0.2.0) of the > > Someone will be wondering why the "initial public release" is version > 0.2 and not 0.1. Is it worth explaining why in a footnote? I think if people would check release notes on NFD homepage, then they would get the idea. We could add extra bullet about this, but I don't think we need extra explanation. > >> NDN Forwarding Daemon (NFD). NFD is a network forwarder that implements the >> Named Data Networking (NDN) protocol. More details about NFD, release >> notes, HOWTOs, a FAQ and other useful resources are available at NFD's >> official webpage (http://named-data.net/doc/NFD/0.2.0/). >> >> Also available is the NFD developer's guide >> (http://named-data.net/wp-content/uploads/2014/07/NFD-developer-guide.pdf), >> which provides a detailed description of the implementation internals. >> >> An important goal of NFD is to support experimentation with the NDN >> architecture. Thus, the current release emphasizes **modularity** and >> **extensibility** over performance to allow easy experimentation with new >> protocol features, algorithms, data structures and applications. We invite >> researchers to experiment with the existing code and submit enhancements >> both in terms of performance and new architecture features. 
>> This release is part of the new NDN Platform version 0.3
>> (http://named-data.net/releases/platform-0.3), which includes the following
>> components:
>>
>> - The NDN Forwarding Daemon (NFD), version 0.2.0
>> http://named-data.net/doc/NFD/0.2.0/
>>
>> - The ndn-cxx library, version 0.2.0
>> http://named-data.net/doc/ndn-cxx/0.2.0/
>>
>> + The NDN C++ library with eXperimental eXtensions (CXX)
>> + The ndnsec security tools to manage security identities and
>> certificates
>>
>> - The NDN Common Client libraries suite (NDN-CCL), version 0.3
>> http://named-data.net/releases/CCL-0.3
>>
>> + The NDN-CPP C++ library
>> + The PyNDN2 Python library
>> + The NDN-JS JavaScript library
>> + (A Java library is coming soon)
>>
>> - The Named Data Link State Routing Protocol (NLSR) version 0.1.0
>> http://named-data.net/doc/NLSR/0.1.0/
>>
>> - The next generation of NDN repository (repo-ng)
>> http://github.com/named-data/repo-ng
>
> No version number for repo-ng?
>
>> - A Ping Application For NDN (ndn-tlv-ping) version 0.2.0
>> http://github.com/named-data/ndn-tlv-ping
>>
>> - A Traffic Generator For NDN (ndn-traffic-generator) version 0.2.0
>> http://github.com/named-data/ndn-traffic-generator
>>
>> ***
>> More detailed information aboutthe NFD release is available on the NDN
>
> Missing space in "aboutthe".
>
>> website
>> http://named-data.net/releases/NFD-0.2.0
>> ***
>>
>> The NFD Team.
>>
>
> The rest looks very good to me.
>
> Thanks,
> Davide
> _______________________________________________
> Nfd-dev mailing list
> Nfd-dev at lists.cs.ucla.edu
> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev

From jdd at seas.wustl.edu Sun Aug 24 19:59:57 2014
From: jdd at seas.wustl.edu (John DeHart)
Date: Sun, 24 Aug 2014 21:59:57 -0500
Subject: [Nfd-dev] nfd-status currentTime only updates every 5 seconds Message-ID: <53FAA6AD.6010208@seas.wustl.edu>

We are using the output from nfd-status-http-server to calculate the link bandwidths for http://ndnmap.arl.wustl.edu/. nfd-status-http-server uses nfd-status to collect its data. We just noticed that the 'currentTime' field reported by nfd-status only updates about every 5 seconds. It looks like the data associated with each face updates immediately, but the currentTime reported does not.

Is there a reason for this or is it a bug?

I'll include below the output from a script looking at one face and currentTime while doing an ndnping across that face so the data associated with the face will be changing every second. I also print the 'date'.
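For reference, the bandwidth math on our side is nothing more complicated than delta-bytes over delta-seconds between two samples. Roughly like the sketch below, where the names are made up for illustration and the timestamp is taken locally when each sample is collected (rather than from the reported currentTime):

    // Sketch: throughput of one face from two nfd-status samples.
    #include <ctime>

    struct FaceSample
    {
      std::time_t takenAt;  // local clock when the sample was collected
      long long inBytes;    // the "B" value in the face's in={...} counters
    };

    double inBitsPerSecond(const FaceSample& prev, const FaceSample& cur)
    {
      double seconds = std::difftime(cur.takenAt, prev.takenAt);
      if (seconds <= 0.0)
        return 0.0;  // identical or out-of-order timestamps
      return 8.0 * static_cast<double>(cur.inBytes - prev.inBytes) / seconds;
    }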
John Sun Aug 24 21:52:13 CDT 2014 currentTime=20140825T025213.076000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288096i 13876d 262018602B} out={241133i 20263d 33700396B}} Sun Aug 24 21:52:14 CDT 2014 currentTime=20140825T025213.076000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288106i 13877d 262020108B} out={241134i 20263d 33700447B}} Sun Aug 24 21:52:15 CDT 2014 currentTime=20140825T025213.076000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288117i 13878d 262021771B} out={241135i 20263d 33700498B}} Sun Aug 24 21:52:16 CDT 2014 currentTime=20140825T025213.076000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288131i 13879d 262023779B} out={241137i 20263d 33700646B}} Sun Aug 24 21:52:17 CDT 2014 currentTime=20140825T025213.076000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288140i 13880d 262025234B} out={241139i 20263d 33700748B}} Sun Aug 24 21:52:18 CDT 2014 currentTime=20140825T025218.372000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288149i 13881d 262026651B} out={241140i 20263d 33700799B}} Sun Aug 24 21:52:19 CDT 2014 currentTime=20140825T025218.372000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288155i 13882d 262027738B} out={241141i 20263d 33700850B}} Sun Aug 24 21:52:20 CDT 2014 currentTime=20140825T025218.372000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288166i 13883d 262029383B} out={241143i 20263d 33700998B}} Sun Aug 24 21:52:21 CDT 2014 currentTime=20140825T025218.372000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288179i 13884d 262031259B} out={241144i 20263d 33701049B}} Sun Aug 24 21:52:22 CDT 2014 currentTime=20140825T025218.372000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288191i 13885d 262033034B} out={241145i 20263d 33701100B}} Sun Aug 24 21:52:23 CDT 2014 currentTime=20140825T025223.664000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288200i 13886d 262034478B} out={241146i 20263d 33701151B}} Sun Aug 24 21:52:24 CDT 2014 currentTime=20140825T025223.664000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288212i 13887d 262036219B} out={241147i 20263d 33701202B}} Sun Aug 24 21:52:25 CDT 2014 currentTime=20140825T025223.664000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288227i 13888d 262038326B} out={241149i 20263d 33701350B}} Sun Aug 24 21:52:26 CDT 2014 currentTime=20140825T025223.664000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288234i 13889d 262039551B} out={241152i 20263d 33701503B}} Sun Aug 24 21:52:27 CDT 2014 currentTime=20140825T025223.664000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288244i 13891d 262041490B} out={241153i 20263d 33701554B}} Sun Aug 24 21:52:28 CDT 2014 currentTime=20140825T025228.966000 faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288249i 13892d 262042479B} out={241154i 20263d 33701605B}} From alexander.afanasyev at ucla.edu Sun Aug 24 20:04:36 2014 
From: alexander.afanasyev at ucla.edu (Alex Afanasyev) Date: Sun, 24 Aug 2014 20:04:36 -0700 Subject: [Nfd-dev] nfd-status currentTime only updates every 5 seconds In-Reply-To: <53FAA6AD.6010208@seas.wustl.edu> References: <53FAA6AD.6010208@seas.wustl.edu> Message-ID: <52FF34E0-82F8-45C4-910C-4987D53FEEEB@ucla.edu> This part is intentional and was put in order to restrict load on NFD to get stats data: data freshness is set to 5 seconds and requests will not reach NFD until freshness expires. If it is necessary, we can reconsider this or reduce freshness period. The same is not put to Face, FIB, RIB, datasets as they are volatile by nature. --- Alex On Aug 24, 2014, at 7:59 PM, John DeHart wrote: > > We are using the ouput from nfd-status-http-server to calculate the link bandwidths > for http://ndnmap.arl.wustl.edu/. > nfd-status-http-server uses nfd-status to collect its data. We just noticed that > The 'currentTime' field reported by nfd-status only updates about every 5 seconds. > It looks like the data associated with each face updates immediately, but the > currentTime reported does not. > > Is there a reason for this or is it a bug? > > I'll include below the output from a script looking at one face and currentTime > while doing an ndnping across that face so the data associated with the > face will be changing every second. I also print the 'date'. > > John > > > Sun Aug 24 21:52:13 CDT 2014 > currentTime=20140825T025213.076000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288096i 13876d 262018602B} out={241133i 20263d 33700396B}} > Sun Aug 24 21:52:14 CDT 2014 > currentTime=20140825T025213.076000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288106i 13877d 262020108B} out={241134i 20263d 33700447B}} > Sun Aug 24 21:52:15 CDT 2014 > currentTime=20140825T025213.076000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288117i 13878d 262021771B} out={241135i 20263d 33700498B}} > Sun Aug 24 21:52:16 CDT 2014 > currentTime=20140825T025213.076000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288131i 13879d 262023779B} out={241137i 20263d 33700646B}} > Sun Aug 24 21:52:17 CDT 2014 > currentTime=20140825T025213.076000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288140i 13880d 262025234B} out={241139i 20263d 33700748B}} > Sun Aug 24 21:52:18 CDT 2014 > currentTime=20140825T025218.372000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288149i 13881d 262026651B} out={241140i 20263d 33700799B}} > Sun Aug 24 21:52:19 CDT 2014 > currentTime=20140825T025218.372000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288155i 13882d 262027738B} out={241141i 20263d 33700850B}} > Sun Aug 24 21:52:20 CDT 2014 > currentTime=20140825T025218.372000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288166i 13883d 262029383B} out={241143i 20263d 33700998B}} > Sun Aug 24 21:52:21 CDT 2014 > currentTime=20140825T025218.372000 > faceid=6445 remote=udp4://128.196.203.36:6363 local=udp4://128.252.153.194:6363 counters={in={2288179i 13884d 262031259B} out={241144i 20263d 33701049B}} > Sun Aug 24 21:52:22 CDT 2014 > currentTime=20140825T025218.372000 > faceid=6445 remote=udp4://128.196.203.36:6363 
From jdd at seas.wustl.edu Sun Aug 24 20:31:51 2014
From: jdd at seas.wustl.edu (John DeHart)
Date: Sun, 24 Aug 2014 22:31:51 -0500
Subject: [Nfd-dev] nfd-status currentTime only updates every 5 seconds
In-Reply-To: <52FF34E0-82F8-45C4-910C-4987D53FEEEB@ucla.edu>
References: <53FAA6AD.6010208@seas.wustl.edu> <52FF34E0-82F8-45C4-910C-4987D53FEEEB@ucla.edu>
Message-ID: <53FAAE27.1000608@seas.wustl.edu>

Alex,

OK, I see how that makes sense.
We'll take a look at generating the time in our daemon that collects the status
and see if that will suffice. I think it probably will.

Thanks,
John

On 8/24/14, 10:04 PM, Alex Afanasyev wrote:
> This part is intentional: it was put in to restrict the load that status requests
> place on NFD. The freshness of the status data is set to 5 seconds, so new requests
> will not reach NFD until the freshness period expires. If necessary, we can
> reconsider this or reduce the freshness period.
>
> The same restriction is not applied to the Face, FIB, and RIB datasets, as they
> are volatile by nature.
>
> On Aug 24, 2014, at 7:59 PM, John DeHart wrote:
>
>> [original report and face counter log trimmed; quoted in full above]
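The workaround John mentions, timestamping each sample in the collecting daemon instead of relying on currentTime, can be sketched roughly as follows. This is an illustration, not the ndnmap collector itself; the face ID and the counters format are taken from the log above:

#!/bin/bash
# Compute per-second outgoing bandwidth for one face using the local
# clock instead of the dataset's currentTime (refreshed only ~every 5 s).
prev_bytes=""
prev_time=""
while true; do
    line=$(nfd-status | grep 'faceid=6445')
    now=$(date +%s)
    # pull the outgoing byte counter out of "out={...i ...d NNNB}"
    bytes=$(echo "$line" | sed -n 's/.*out={[0-9]*i [0-9]*d \([0-9]*\)B}.*/\1/p')
    if [ -n "$prev_bytes" ] && [ "$now" -gt "$prev_time" ]; then
        echo "$now out_bps=$(( (bytes - prev_bytes) * 8 / (now - prev_time) ))"
    fi
    prev_bytes=$bytes
    prev_time=$now
    sleep 1
done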
From alexander.afanasyev at ucla.edu Sun Aug 24 20:42:41 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Sun, 24 Aug 2014 20:42:41 -0700
Subject: [Nfd-dev] nfd-status currentTime only updates every 5 seconds
In-Reply-To: <53FAAE27.1000608@seas.wustl.edu>
References: <53FAA6AD.6010208@seas.wustl.edu> <52FF34E0-82F8-45C4-910C-4987D53FEEEB@ucla.edu> <53FAAE27.1000608@seas.wustl.edu>
Message-ID: <7C33FC63-B415-4EEF-ADEA-528D183CF762@ucla.edu>

Btw, do we have / can we have a system like cacti (http://www.cacti.net/) set up to collect historical volumes of traffic per link?

---
Alex

On Aug 24, 2014, at 8:31 PM, John DeHart wrote:

> [John's reply trimmed; quoted in full above]
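Even without a full cacti deployment, periodically appending counter samples to a flat file gives a grapher something to work with. A minimal sketch, with a hypothetical log path and the face ID from the log above; a tool like cacti or rrdtool would normally do this sampling itself:

#!/bin/bash
# Append one timestamped counters sample per minute; cacti, rrdtool,
# or gnuplot can turn this into historical per-link traffic graphs.
while true; do
    echo "$(date +%s),$(nfd-status | grep 'faceid=6445')" >> /var/log/ndn/face6445.log
    sleep 60
done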
From alexander.afanasyev at ucla.edu Mon Aug 25 18:45:35 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Mon, 25 Aug 2014 18:45:35 -0700
Subject: [Nfd-dev] First public release of NDN Forwarding Daemon (NFD)
Message-ID:

Dear all,

We are pleased to announce the initial public release (version 0.2.0) of the NDN
Forwarding Daemon (NFD). NFD is a network forwarder that implements the Named Data
Networking (NDN) protocol. More details about NFD, release notes, HOWTOs, a FAQ, and
other useful resources are available at NFD's official webpage
(http://named-data.net/doc/NFD/0.2.0/).

Also available is the NFD developer's guide
(http://named-data.net/wp-content/uploads/2014/07/NFD-developer-guide.pdf), which
provides a detailed description of the implementation internals.

An important goal of NFD is to support the broader community in experimenting with
the NDN architecture. Thus, the current release emphasizes **modularity** and
**extensibility** over performance, to allow easy experimentation with new protocol
features, algorithms, data structures, and applications. We invite all interested
parties to experiment with the existing code and to submit their contributions (new
architecture features, performance improvements) to NFD Redmine
(http://redmine.named-data.net/projects/nfd) or directly to Gerrit Code Review
(http://gerrit.named-data.net/#/both).

***
More detailed information about the NFD release is available on the NDN website:
http://named-data.net/releases/NFD-0.2.0
***

This release is part of the new NDN Platform version 0.3
(http://named-data.net/releases/platform-0.3), which includes the following components:

- The NDN Forwarding Daemon (NFD), version 0.2.0
  http://named-data.net/doc/NFD/0.2.0/

- The ndn-cxx library, version 0.2.0
  http://named-data.net/doc/ndn-cxx/0.2.0/

  + The NDN C++ library with eXperimental eXtensions (CXX)
  + The ndnsec security tools to manage security identities and certificates

- The NDN Common Client Libraries suite (NDN-CCL), version 0.3
  http://named-data.net/releases/CCL-0.3

  + The NDN-CPP C++ library
  + The PyNDN2 Python library
  + The NDN-JS JavaScript library
  + (A Java library is coming soon)

- The Named Data Link State Routing Protocol (NLSR), version 0.1.0
  http://named-data.net/doc/NLSR/0.1.0/

- The next generation of NDN repository (repo-ng), version 0.1.0
  http://github.com/named-data/repo-ng

- A ping application for NDN (ndn-tlv-ping), version 0.2.0
  http://github.com/named-data/ndn-tlv-ping

- A traffic generator for NDN (ndn-traffic-generator), version 0.2.0
  http://github.com/named-data/ndn-traffic-generator

- A packet capture and analysis tool for NDN (ndndump), version 0.5
  https://github.com/named-data/ndndump

The NFD Team.
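For anyone who wants to try the release from source, the general pattern (build and install ndn-cxx first, then NFD; both use waf) looks like the sketch below. This is only an outline; the per-platform prerequisites and exact steps are in the HOWTOs linked above:

# build and install the ndn-cxx library, then NFD
git clone https://github.com/named-data/ndn-cxx
cd ndn-cxx && ./waf configure && ./waf && sudo ./waf install && cd ..
git clone https://github.com/named-data/NFD
cd NFD && ./waf configure && ./waf && sudo ./waf install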
From alexander.afanasyev at ucla.edu Mon Aug 25 18:53:36 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Mon, 25 Aug 2014 18:53:36 -0700
Subject: [Nfd-dev] First public release of NDN Forwarding Daemon (NFD)
In-Reply-To:
References:
Message-ID: <81430C16-C6D9-442D-80FD-C1FBA743DCB7@ucla.edu>

I have sent this announcement to the ndn-interest at lists.cs.ucla.edu, ndnsim at lists.cs.ucla.edu, icnrg at irtf.org, and ccnx-users at ccnx.org mailing lists. Hopefully, it didn't get blocked...

If you know other relevant mailing lists, please forward it there (but post the mailing list name here in advance, so we can avoid duplicate posts).

---
Alex

On Aug 25, 2014, at 6:45 PM, Alex Afanasyev wrote:

> [release announcement trimmed; quoted in full above]

From bzhang at cs.arizona.edu Mon Aug 25 21:33:48 2014
From: bzhang at cs.arizona.edu (Beichuan Zhang)
Date: Mon, 25 Aug 2014 21:33:48 -0700
Subject: [Nfd-dev] First public release of NDN Forwarding Daemon (NFD)
In-Reply-To: <81430C16-C6D9-442D-80FD-C1FBA743DCB7@ucla.edu>
References: <81430C16-C6D9-442D-80FD-C1FBA743DCB7@ucla.edu>
Message-ID:

I'll forward it to the hoticn at googlegroups.com list used by Chinese researchers.

Beichuan

On Aug 25, 2014, at 6:53 PM, Alex Afanasyev wrote:

> [message trimmed; quoted in full above]
From lanwang at memphis.edu Tue Aug 26 08:56:14 2014
From: lanwang at memphis.edu (Lan Wang (lanwang))
Date: Tue, 26 Aug 2014 15:56:14 +0000
Subject: [Nfd-dev] First public release of NDN Forwarding Daemon (NFD)
In-Reply-To: <81430C16-C6D9-442D-80FD-C1FBA743DCB7@ucla.edu>
References: <81430C16-C6D9-442D-80FD-C1FBA743DCB7@ucla.edu>
Message-ID: <4A9FD23E-63B3-49E0-BDBA-C071AD099A92@memphis.edu>

Can you send it to the ndn at lists.cs.ucla.edu list? Some of my students didn't get the email.
I think they are only on the ndn mailing list.

Also, if NDNcomm has a mailing list, this email can be forwarded there.

Lan

On Aug 25, 2014, at 8:53 PM, Alex Afanasyev wrote:

> [message trimmed; quoted in full above]
From alexander.afanasyev at ucla.edu Tue Aug 26 11:26:29 2014
From: alexander.afanasyev at ucla.edu (Alex Afanasyev)
Date: Tue, 26 Aug 2014 11:26:29 -0700
Subject: [Nfd-dev] First public release of NDN Forwarding Daemon (NFD)
In-Reply-To: <4A9FD23E-63B3-49E0-BDBA-C071AD099A92@memphis.edu>
References: <81430C16-C6D9-442D-80FD-C1FBA743DCB7@ucla.edu> <4A9FD23E-63B3-49E0-BDBA-C071AD099A92@memphis.edu>
Message-ID:

Sure, I will forward the announcement to the ndn@ list in 10 minutes.

--
Alex

On Aug 26, 2014, at 8:56 AM, Lan Wang (lanwang) wrote:

> [message trimmed; quoted in full above]
From shijunxiao at email.arizona.edu Wed Aug 27 09:42:56 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Wed, 27 Aug 2014 09:42:56 -0700
Subject: [Nfd-dev] delay with ndnpingserver
In-Reply-To:
References:
Message-ID:

Hi Obaid

This is what I have from ndn6.tk to UCLA:

--- 2607:f010:3f9::11:0 ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19019ms
rtt min/avg/max/mdev = 141.671/142.007/143.969/0.672 ms

=== Ping Statistics For /ndn/edu/ucla ===
Sent=20, Received=20, Packet Loss=0%, Total Time=3115.14 ms
Round Trip Time (Min/Max/Avg/MDev) = (150.608/186.74/155.757/7.46211) ms

On average, ndnping is 12 ms slower than ICMP ping. A few factors contribute to this difference:

- kernel vs userspace
  - An ICMP ping is answered by the OS kernel.
  - ndnping is handled by at least two userspace processes on each end.
- signing
  - An ICMP ping reply is a simple unsigned packet.
  - ndnpingserver needs to sign the reply Data with RSA. Signing is slow, and we can't switch to a SHA256 digest now.
- caching
  - An ICMP ping reply is not cached.
  - NFD has an in-network cache. The ContentStore implementation is inefficient.

The RTT variance of ndnping is 7 ms higher than that of ICMP ping, especially on the first request. This is mainly due to the behavior of the NCC strategy.

Yours, Junxiao
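Of the factors above, the RSA signing cost is the easiest to gauge on a given host, since each ndnpingserver reply requires one signature. A rough sketch using openssl's built-in benchmark; note that this measures raw RSA-2048 signing, not ndn-cxx's full signing path, and the 2048-bit key size is an assumption:

# signs/second puts an upper bound on how fast a single-threaded
# producer can generate signed Data packets
openssl speed rsa2048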
On Tue, Aug 26, 2014 at 11:51 PM, Syed Obaid Amin wrote:

> Hello All,
>
> I am observing that ndnping is taking much time in processing the packets.
> For example, have a look at the following traces. There is approximately a 100 ms
> difference between a regular ping and ndnping (besides the first packet).
>
> oamin at arizona:~/exp-caidatopo-nfd/scripts$ ping memphis
> PING memphis-lan0 (10.1.1.5) 56(84) bytes of data.
> 64 bytes from memphis-lan0 (10.1.1.5): icmp_req=1 ttl=64 time=65.7 ms
> 64 bytes from memphis-lan0 (10.1.1.5): icmp_req=2 ttl=64 time=64.8 ms
> 64 bytes from memphis-lan0 (10.1.1.5): icmp_req=3 ttl=64 time=65.4 ms
> 64 bytes from memphis-lan0 (10.1.1.5): icmp_req=4 ttl=64 time=67.6 ms
> 64 bytes from memphis-lan0 (10.1.1.5): icmp_req=5 ttl=64 time=67.9 ms
> 64 bytes from memphis-lan0 (10.1.1.5): icmp_req=6 ttl=64 time=69.0 ms
>
> oamin at arizona:~/exp-caidatopo-nfd/scripts$ ndnping /ndn/memphis/name1
>
> === Pinging /ndn/memphis/name1 ===
>
> Content From /ndn/memphis/name1 - Ping Reference = 1260650668 - Round Trip Time = 232.614 ms
> Content From /ndn/memphis/name1 - Ping Reference = 1260650669 - Round Trip Time = 150.735 ms
> Content From /ndn/memphis/name1 - Ping Reference = 1260650670 - Round Trip Time = 151.064 ms
> Content From /ndn/memphis/name1 - Ping Reference = 1260650671 - Round Trip Time = 147.209 ms
> Content From /ndn/memphis/name1 - Ping Reference = 1260650672 - Round Trip Time = 151.125 ms
> Content From /ndn/memphis/name1 - Ping Reference = 1260650673 - Round Trip Time = 161.333 ms
>
> I checked the logs, and it looks like sometimes ndnpingserver, after
> receiving an Interest, takes much time to generate a response.
>
> Secondly, the RTT in ndnping seems to have high variance. For example, look at
> the 1st and 4th packets.
>
> Any idea what's going on here? @John, are you experiencing the same?
>
> Regards,
> Obaid

From obaidasyed at gmail.com Thu Aug 28 16:12:35 2014
From: obaidasyed at gmail.com (Syed Obaid Amin)
Date: Thu, 28 Aug 2014 18:12:35 -0500
Subject: [Nfd-dev] Details about available forwarding strategies
Message-ID:

Hello All,

Where can I get details about the available forwarding strategies in NFD? On the wiki I can see the available choices, but I couldn't find details and differences. Moreover, I am observing that the best-route strategy considers only one best face for forwarding packets. Is this the desired behavior?

Thanks,
Obaid

From shijunxiao at email.arizona.edu Thu Aug 28 16:15:44 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 28 Aug 2014 16:15:44 -0700
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References:
Message-ID:

Hi Obaid

High-level descriptions of the strategies are in the Developer Guide. Additional details are in the strategies' header files (and thus appear in Doxygen).

It's correct for best-route to consider only one nexthop.

Yours, Junxiao

On Aug 28, 2014 4:13 PM, "Syed Obaid Amin" wrote:

> [message trimmed; quoted in full above]
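If forwarding over more than one nexthop is actually what is wanted, the strategy for a namespace can be switched, as the replies below elaborate. A sketch using the nfdc syntax of this NFD release, with /example as a placeholder prefix and strategy names as listed in the NFD documentation:

# use every eligible nexthop for Interests under /example
nfdc set-strategy /example ndn:/localhost/nfd/strategy/broadcast

# switch back to the default single-nexthop behavior
nfdc set-strategy /example ndn:/localhost/nfd/strategy/best-route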
From bzhang at cs.arizona.edu Thu Aug 28 16:40:04 2014
From: bzhang at cs.arizona.edu (Beichuan Zhang)
Date: Thu, 28 Aug 2014 16:40:04 -0700
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References:
Message-ID: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>

On Aug 28, 2014, at 4:15 PM, Junxiao Shi wrote:

> It's correct for best-route to consider only one nexthop.

The current version of best-route uses one nexthop at a time. An Interest arriving later may use another nexthop if the previous one didn't get Data back.

Beichuan

> [rest of the thread trimmed; quoted in full above]

From obaidasyed at gmail.com Thu Aug 28 16:48:42 2014
From: obaidasyed at gmail.com (Syed Obaid Amin)
Date: Thu, 28 Aug 2014 18:48:42 -0500
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
References: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
Message-ID:

Got it! Thanks Beichuan and Junxiao.

On Thu, Aug 28, 2014 at 6:40 PM, Beichuan Zhang wrote:

> [message trimmed; quoted in full above]
From obaidasyed at gmail.com Thu Aug 28 17:57:01 2014
From: obaidasyed at gmail.com (Syed Obaid Amin)
Date: Thu, 28 Aug 2014 19:57:01 -0500
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
Message-ID:

Is there a way to specify the default strategy in nfd.conf? I can only change it using nfdc right now.

On Thu, Aug 28, 2014 at 6:48 PM, Syed Obaid Amin wrote:

> [message trimmed; quoted in full above]

From shijunxiao at email.arizona.edu Thu Aug 28 18:25:27 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 28 Aug 2014 18:25:27 -0700
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
Message-ID:

Hi Obaid

nfd.conf is designed to contain only those settings that are unlikely to change during runtime. Therefore, the strategy choice, faces, and RIB are only configurable through nfdc or its underlying protocol.

You can write a bash script that invokes nfd-start and then executes initialization commands, such as adding a face and setting a strategy.

Yours, Junxiao

On Aug 28, 2014 5:57 PM, "Syed Obaid Amin" wrote:

> [message trimmed; quoted in full above]

From obaidasyed at gmail.com Thu Aug 28 18:34:34 2014
From: obaidasyed at gmail.com (Syed Obaid Amin)
Date: Thu, 28 Aug 2014 20:34:34 -0500
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
Message-ID:

On Thu, Aug 28, 2014 at 8:25 PM, Junxiao Shi wrote:

> You can write a bash script that invokes nfd-start and then executes
> initialization commands, such as adding a face and setting a strategy.
Yes, that's what I am doing right now, but I found the following in nfd.conf.sample, so I thought there might be a way to specify it in the conf file as well.

; The tables section configures the CS, PIT, FIB, Strategy Choice, and Measurements
tables
{
  ; ContentStore size limit in number of packets
  ; default is 65536, about 500MB with 8KB packet size
  cs_max_packets 65536
}

Thanks,
Obaid

From shijunxiao at email.arizona.edu Thu Aug 28 18:38:14 2014
From: shijunxiao at email.arizona.edu (Junxiao Shi)
Date: Thu, 28 Aug 2014 18:38:14 -0700
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
Message-ID:

Hi Obaid

The tables section is intended to configure the *characteristics* of all tables (e.g. capacity, replacement policy, storage structure), not the *contents* of those tables. Setting a strategy inserts an entry into the Strategy Choice table; this is an operation that affects the content of that table, which is not covered by the tables section.

Yours, Junxiao

On Aug 28, 2014 6:34 PM, "Syed Obaid Amin" wrote:

> [message trimmed; quoted in full above]
From cawka1 at gmail.com Thu Aug 28 18:17:29 2014
From: cawka1 at gmail.com (Alex Afanasyev)
Date: Thu, 28 Aug 2014 18:17:29 -0700
Subject: [Nfd-dev] Details about available forwarding strategies
In-Reply-To:
References: <4096D813-3CD6-4851-8E92-D7AD2771CD8E@cs.arizona.edu>
Message-ID: <332D13F8-0A50-463E-AC80-BF995CDD2DDC@gmail.com>

With the Linux packages, you can put initial-configuration nfdc commands into the /etc/ndn/nfd-init.sh script.

---
Alex

On Aug 28, 2014, at 5:57 PM, Syed Obaid Amin wrote:

> [message trimmed; quoted in full above]
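Putting Junxiao's and Alex's suggestions together, an initialization script might look like the sketch below. The prefixes, the face URI, and the sleep are illustrative only; with the Linux packages, the nfdc lines alone would go into /etc/ndn/nfd-init.sh instead of a wrapper script:

#!/bin/bash
# Wrapper: start NFD, then apply initial configuration with nfdc.
nfd-start
sleep 2   # give NFD a moment to come up before issuing commands

# choose a default strategy for the root namespace
nfdc set-strategy / ndn:/localhost/nfd/strategy/best-route

# create a face to a neighbor and register a prefix towards it
nfdc register /ndn/memphis udp4://10.1.1.5:6363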