[Ndn-interest] Repo vs aggregation queries

Junxiao Shi shijunxiao at email.arizona.edu
Fri Jul 19 09:21:01 PDT 2019


Dear folks

On the other hand, I could have a separate "database" application answering
>> aggregation queries. In this case, the database application could easily
>> provide individual data points as well. Then, is there any value to still
>> have the repo, and store every data point twice?
>>
>>
>> I would not add aggregation operators to the repo.  I would, however,
>> have an aggregation app serving a slightly different namespace, that was
>> able to locally access the data in the repo and provide the aggregated
>> result to any clients requesting it.
>>
>
> Would this "aggregation app" be a general purpose utility, or does it have
> to be tailored to each application such as building sensing?
> If it's general purpose, how could the app understand the specific naming
> and encoding format of building sensing protocol?
>
>
> I would expect it to be designed in concert with the data logging
> application -- it could start off being single purpose but you might find
> that it generalizes.  In the same way that a SQL query has to know the
> naming and encoding of data in tables.
>

I'm smelling Named Function Networking <http://www.named-function.net/>
here: each data logging application could design its aggregation feature as
a named function, that is executed on the same node as the repo.
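A minimal sketch of this idea (this is not NFN's actual API; the registry, names, and function signatures below are all hypothetical): the data logging application registers its aggregation under a name, and the node hosting the repo executes it locally over the stored data points.

```python
from typing import Callable, Iterable

# Hypothetical registry mapping a function name to its implementation.
named_functions: dict[str, Callable[[Iterable[float]], float]] = {}

def register(name: str, fn: Callable[[Iterable[float]], float]) -> None:
    named_functions[name] = fn

def execute(name: str, data_points: Iterable[float]) -> float:
    """Run the named function co-located with the repo's data."""
    return named_functions[name](data_points)

# The building-sensing application supplies its own aggregation,
# so the "repo node" needs no knowledge of the app's data format.
register("/bldg/sensing/max-temp", max)

print(execute("/bldg/sensing/max-temp", [21.5, 23.0, 22.1]))  # 23.0
```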


>
>
>  That's going to be local, not network, I/O.  The aggregated response
>> might be cached, possibly in the same repo as the raw measurements.  It
>> doesn't require storing the data twice.
>>
>
> Is this "aggregation app" accessing the data directly on the disk, or does
> it have to send Interests to the repo via a local socket?
>
>
> If using disk access, what's the benefit of having it as a separate app?
>
> If using Interest-Data exchanges, even if the packets are local, this
> still has huge overhead (encoding, forwarding, processing, etc) compared to
> a SQL query on the database.
>
>
> This is a pretty raw view of my reasoning, having thought about the
> problem for all of 10 minutes:
>
> I'd design it as Interest-Data exchanges to start with, then I'd measure
> the system performance to see if it was acceptable and if its scaling
> properties met my requirements, and if it wasn't performing/scaling
> reasonably then I'd look at where the problem was and design a solution
> that addressed the problem.  I am a fan of optimizing
> implementation/architecture based on actual measurement -- though of course
> one's choices should be informed by theoretical complexity issues... but
> the constants matter too!
>
> I don't immediately come to the same conclusion as you do about a SQL
> query vs an application such as I'm describing.
>
> Remember that the repo (at least the one I worked on & with) stores
> everything in wire-format packets.  It happened to use B-trees with pages
> of comparable size to a disk page so the I/O performance was good, there
> were many (if they were small) content packets in a B-tree block and there
> was caching of the B-tree blocks so the overhead for reading multiple
> sequential items was minimized.
>

The B-tree appears to refer to CCNx 0.7.2 repo.
NDN's repo implementations are worse: all three known implementations use
some form of database (SQLite, PostgreSQL, or RocksDB).

In either case, the main difference between a SQL query and Interest-Data
exchanges is that the SQL server has access to a table index that is
updated automatically whenever a row is inserted.
The example I gave in my first message is "what's the maximum temperature
among all the data points collected in a set of rooms within a time period".
Given an index over the timestamp-room-temperature columns, the SQL
server can answer this question by reading just the index and a small
number of rows (I know it's a very specialized index, but such indexes were
mandatory during the early days of Google App Engine and could be created
automatically with their development tools).
On the other hand, the repo has to pass every data point from these rooms
in the requested time period to the aggregation app, incurring a much
larger overhead.
Thus, the crux of the issue is: the SQL server can have an index over the
columns needed for aggregation, while the repo + aggregation app cannot add
arbitrary indexes.
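To make this concrete, here is a small sketch of the building-sensing example using SQLite (the table name, column names, and sample readings are all illustrative): with an index over (timestamp, room, temperature), the max-temperature query can be answered from the index alone, without touching the base table rows.

```python
import sqlite3

# Hypothetical schema for the building-sensing example.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE readings (
        timestamp   INTEGER,
        room        TEXT,
        temperature REAL
    )
""")
# The specialized timestamp-room-temperature index: it covers every
# column the query below touches, so the query can be satisfied by
# scanning a range of the index.
db.execute(
    "CREATE INDEX idx_ts_room_temp ON readings (timestamp, room, temperature)"
)

db.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(100, "A", 21.5), (110, "B", 23.0), (120, "A", 22.1), (130, "C", 19.8)],
)

# "Maximum temperature among all data points collected in a set of
# rooms within a time period" as a single SQL query.
(max_temp,) = db.execute(
    "SELECT MAX(temperature) FROM readings "
    "WHERE room IN ('A', 'B') AND timestamp BETWEEN 100 AND 125"
).fetchone()
print(max_temp)  # 23.0
```

A repo + aggregation app answering the same question would instead ship every matching Data packet across a socket to be decoded and compared one by one.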


> The only *encoding* operation should be on the aggregation results.
>

I should have said "decoding".


> All of the forwarding operations should be in-memory.  I doubt that you
> can get zero-copy from the in-memory repo packet through to the aggregation
> application's buffer, but it shouldn't be massively bad.
>
> There are analogous operations in both the repo and SQL cases -- SQL is
> going to be interpreting a table schema to drive accesses to table data
> stored on disk (and cached in memory) and decoding and applying the
> operations from the query etc. etc.  For both SQL and stored content
> objects you'll be making data representation choices that affect the speed
> of the operations you'll be doing (e.g., storing measurements as text or
> binary values).
>
> People have cared for some time that SQL databases had good performance...
> so a lot of time has been spent optimizing them.
> Nobody has spent a lot of time optimizing repo query tools, and the
> supporting NDN components, but I think it would have a good payoff for many
> applications if someone did.
>

Why should we spend time optimizing "repo query tools", instead of reusing
the existing engineering effort behind good enterprise-grade SQL databases?




> From this discussion it almost seems like these repos should act as NDN
> adapters to existing storage and grid storage solutions providing a basic
> but extensible naming schema.  Of course developing that naming schema and
> mapping can be complex.  Lots of new storage solutions like redis.io are
> making querying language simpler and are used in enterprise systems today.
> Redis is used extensively by Discord for instance.
>

This describes an NDN frontend to a database, where the data schema is
defined per application. I could certainly have a BLOB column that stores the
original Data packets coming from the sensor, in addition to individual
columns for temperature, humidity, etc., so that the database can serve both
original Data packets and aggregation queries.
However, this would not be a repo, because a repo is supposed to be general
purpose and able to store data from any application (as permitted by trust
policy).
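Such a per-application schema might look like the following sketch (table name, column names, and the sample packet bytes are hypothetical): the wire-format Data packet is kept in a BLOB column alongside decoded columns, so one table serves both packet retrieval by name and aggregation.

```python
import sqlite3

# Illustrative per-application schema, not any actual repo's schema.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE sensor_data (
        name        TEXT PRIMARY KEY,  -- NDN Data packet name
        packet      BLOB,              -- original wire-format Data packet
        timestamp   INTEGER,
        room        TEXT,
        temperature REAL,
        humidity    REAL
    )
""")
db.execute(
    "INSERT INTO sensor_data VALUES (?, ?, ?, ?, ?, ?)",
    # b"\x06..." stands in for real Data packet wire encoding.
    ("/bldg/roomA/temp/100", b"\x06placeholder", 100, "A", 21.5, 40.2),
)

# Serve the original packet by name...
(packet,) = db.execute(
    "SELECT packet FROM sensor_data WHERE name = ?",
    ("/bldg/roomA/temp/100",),
).fetchone()

# ...or answer an aggregation query from the decoded columns.
(max_temp,) = db.execute(
    "SELECT MAX(temperature) FROM sensor_data WHERE room = 'A'"
).fetchone()
```

The point stands, though: the decoded columns are specific to the building-sensing protocol, so this is an application database with an NDN face, not a general-purpose repo.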

Yours, Junxiao