[ndnSIM] Fwd: Memory and Time Consumption - ndnSIM(v1) Simulations

Andriana Ioannou ioannoa at tcd.ie
Thu Jun 20 05:47:05 PDT 2019


Hi Junxiao,

Many thanks for your prompt reply. I have already looked at the pricing
page for the services provided on Google Cloud - linked below - and at the
machine type that you suggested. Considering that the time and memory
consumption mentioned refer to a single experiment, and that I have a set
of parameters to vary, the cost of that option is unfortunately too high.
Nonetheless, it is very helpful to know that this time and memory
consumption is to be expected given the specs of the ndnSIM simulator. I
was already aware that ndnSIM v2 requires more memory, which is why I
ruled it out in the first place.

At this point I am simply wondering whether the NDN research group has
come up with some kind of formula that would allow experimenters to
estimate the memory and time consumption to expect from their experiments.

Kind regards,
Andriana.

https://cloud.google.com/compute/pricing

On Thu, 20 Jun 2019 at 15:04, Junxiao Shi <shijunxiao at email.arizona.edu>
wrote:

> Hi Andriana
>
> ns-3's virtual payload feature means the memory consumption of the
> experiment is independent of packet size ("chunk size 10KB" in your
> experiment). Everything else still consumes memory, and 750 bytes per CS
> entry is a reasonable estimate.
> You have 155 nodes, each holding up to 100000 cache entries. That's 11GB
> of memory if all entries are occupied.
>
> Scaling to 10000000 cache entries would require about 1100GB of memory.
> Google Cloud offers 1922GB of memory in the n1-ultramem-80 machine type,
> which would be sufficient.
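>
> In other words, the back-of-envelope formula is:
>
>     memory ≈ nodes × CS capacity per node × bytes per CS entry
>
> e.g. 155 × 100000 × 750 B ≈ 11GB, and 155 × 10000000 × 750 B ≈ 1100GB.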
>
> Experiment duration mainly depends on traffic volume, i.e. the number of
> packets transmitted. ns-3 supports an MPI distributed scheduler
> https://www.nsnam.org/docs/models/html/distributed.html . You could use
> it to speed up the experiment. If you are able to distribute the
> experiment across all 80 CPUs of the n1-ultramem-80, it'll be 40~60x
> faster than running on a single CPU.
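>
> The skeleton of a distributed scenario looks roughly like this (a sketch
> of the standard ns-3 MPI setup, not a complete scenario; how you assign
> nodes to ranks is up to you):
>
>     #include "ns3/core-module.h"
>     #include "ns3/mpi-interface.h"
>
>     using namespace ns3;
>
>     int main(int argc, char* argv[])
>     {
>       // Select the distributed scheduler before creating any nodes.
>       GlobalValue::Bind("SimulatorImplementationType",
>                         StringValue("ns3::DistributedSimulatorImpl"));
>       MpiInterface::Enable(&argc, &argv);
>
>       uint32_t rank = MpiInterface::GetSystemId(); // this process's rank,
>       uint32_t nRanks = MpiInterface::GetSize();   // useful for rank-local setup
>
>       // Every node is pinned to an MPI rank (system id) at creation time;
>       // links that cross ranks must be point-to-point with nonzero delay.
>       Ptr<Node> n0 = CreateObject<Node>(0);           // simulated on rank 0
>       Ptr<Node> n1 = CreateObject<Node>(1 % nRanks);  // simulated on rank 1
>
>       // ... build the topology and install the NDN stack and apps ...
>
>       Simulator::Run();
>       Simulator::Destroy();
>       MpiInterface::Disable();
>       return 0;
>     }
>
> Build ns-3 with "./waf configure --enable-mpi" and launch the scenario
> binary under mpirun.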
>
> ndnSIM v2 does not support virtual payload, and each 10KB entry is likely
> to consume about 30KB of memory. The same experiment would consume 443GB
> of memory in ndnSIM v2. With 10000000 cache entries it'll be 43TB, and
> you can't find a machine with that much memory. However, with MPI you can
> distribute the workload across multiple physical machines. It'll be quite
> difficult to set up, but not impossible.
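>
> (The same arithmetic as before: 155 × 100000 × 30KB ≈ 443GB, and
> 155 × 10000000 × 30KB ≈ 43TB.)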
>
> Yours, Junxiao
>
> On Thu, Jun 20, 2019, 07:16 Andriana Ioannou <ioannoa at tcd.ie> wrote:
>
>> Dear all,
>>
>> I have recently been running a number of experiments to support my
>> research. However, I am facing some issues with regard to both memory
>> and time consumption. My experiments' setup may be summarised as
>> follows:
>>
>> Catalog Size:                        1000000 & 100000000 objects
>> Network Topologies:                  Tiscali AS-3257 & Exodus AS-3967
>>                                        (75 & 97 routers / 56 & 58
>>                                        gateways, respectively)
>> Object Size:                         1000 chunks on average
>> Chunk Size:                          10KB (10240-byte virtual payload)
>> Cache Size:                          100000, 1000000 & 10000000 chunks
>> Number of Producers:                 1
>> Number of Consumers:                 150 on average, installed at each
>>                                        gateway
>> Popularity Distribution:             Weibull distribution
>> Interests Window:                    1
>> ΔT between subsequent Interests:     0ms
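>>
>> For reference, the cache and application parameters above map onto the
>> ndnSIM v1 helpers roughly as follows - a simplified two-node sketch,
>> not my actual scenario, and the prefix name is illustrative:
>>
>>     #include "ns3/core-module.h"
>>     #include "ns3/network-module.h"
>>     #include "ns3/point-to-point-module.h"
>>     #include "ns3/ndnSIM-module.h"
>>
>>     using namespace ns3;
>>
>>     int main(int argc, char* argv[])
>>     {
>>       // Two-node stand-in for the real topology: consumer <-> producer.
>>       NodeContainer nodes;
>>       nodes.Create(2);
>>       PointToPointHelper p2p;
>>       p2p.Install(nodes.Get(0), nodes.Get(1));
>>
>>       // NDN stack with an LRU content store capped at 100000 entries.
>>       ndn::StackHelper ndnHelper;
>>       ndnHelper.SetContentStore("ns3::ndn::cs::Lru", "MaxSize", "100000");
>>       ndnHelper.InstallAll();
>>
>>       // Window-based consumer: window of 1, no gap between Interests.
>>       ndn::AppHelper consumerHelper("ns3::ndn::ConsumerWindow");
>>       consumerHelper.SetPrefix("/prefix");
>>       consumerHelper.SetAttribute("Window", StringValue("1"));
>>       consumerHelper.Install(nodes.Get(0));
>>
>>       // Producer answering every Interest with a 10KB virtual payload.
>>       ndn::AppHelper producerHelper("ns3::ndn::Producer");
>>       producerHelper.SetPrefix("/prefix");
>>       producerHelper.SetAttribute("PayloadSize", StringValue("10240"));
>>       producerHelper.Install(nodes.Get(1));
>>
>>       // Static route so Interests reach the producer.
>>       ndn::StackHelper::AddRoute(nodes.Get(0), "/prefix", nodes.Get(1), 1);
>>
>>       Simulator::Stop(Seconds(20.0));
>>       Simulator::Run();
>>       Simulator::Destroy();
>>       return 0;
>>     }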
>>
>> Based on the above setup, I am only able to run experiments when the
>> cache size is equal to 100000 chunks (the smallest). The memory
>> consumption for this scenario is about 10GB, for as long as I have been
>> recording it. The time consumption varies from roughly 8 to 10 hours,
>> depending on how many cores I use and how fast my CPU is. I should note
>> that these numbers refer to runs that already use MPI.
>>
>> Consulting the ndnSIM documentation, these numbers look reasonable to
>> me, since e.g. a single CS entry in ndnSIM v1 may take about 0.75KB,
>> even before accounting for the memory consumed by the rest of the
>> architectural components. I also believe the time consumption is related
>> to the large number of Interests generated, given that no time interval
>> is applied between them.
>>
>> So, my first question is: does this look reasonable to you too? Is this
>> something that is indeed to be expected?
>>
>> And my second question: if so, is there a way to avoid this and run
>> larger experiments?
>>
>> Kind regards,
>> Andriana.
>>

