It looks like we can get a significant latency boost if we use HTTP/2. For example, this is the output of the attached script large-dataset-indexing.py:
```
Dataset size: 3814.697 MB
Time: 118 ms
<class 'blosc2.c2array.C2Array'> (1000, 1000, 1000) int32
[[500302900 500302901 500302902 500302903 500302904]
 [501302900 501302901 501302902 501302903 501302904]]
Time: 121 ms
Time: 178 ms
<class 'caterva2.client.Dataset'> (1000, 1000, 1000) int32
[[500302900 500302901 500302902 500302903 500302904]
 [501302900 501302901 501302902 501302903 501302904]]
Time: 165 ms
```
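One way to quantify the HTTP/1.1 vs HTTP/2 gap from a plain Python client is to time the same request over both transports. A minimal sketch, assuming the httpx package (which speaks HTTP/2 when installed with its `h2` extra); the timing helper itself is generic, and the endpoint URL in the usage comment is a placeholder, not a real API path:

```python
import time

def time_request(client, url, n=5):
    """Time n sequential GETs on an already-open client; return wall times in seconds."""
    times = []
    for _ in range(n):
        t0 = time.perf_counter()
        client.get(url)
        times.append(time.perf_counter() - t0)
    return times

# Usage (requires network access and httpx installed with its `h2` extra;
# the URL below is illustrative only):
#
#   import httpx
#   url = "https://cat2.cloud/demo/api/..."
#   with httpx.Client(http2=False) as c1, httpx.Client(http2=True) as c2:
#       print("HTTP/1.1 best:", min(time_request(c1, url)))
#       print("HTTP/2   best:", min(time_request(c2, url)))
```

Reusing one client for all requests matters here: it keeps the TCP/TLS connection open, so the comparison measures the protocol difference rather than repeated handshakes.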
However, when running a notebook inside the browser (e.g. https://cat2.cloud/demo/api/download/@public/examples/large-dataset-indexing.ipynb or https://cat2.cloud/demo/static/jupyterlite/notebooks/index.html?path=@public/examples/large-dataset-indexing.ipynb), the times are much better:
```
Dataset size: 3814.70 MB
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 45 ms
---
<class 'blosc2.c2array.C2Array'> (1000, 1000, 1000) int32
[[500302900 500302901 500302902 500302903 500302904]
 [501302900 501302901 501302902 501302903 501302904]]
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 43 ms
---
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 37 ms
---
<class 'caterva2.client.Dataset'> (1000, 1000, 1000) int32
[[500302900 500302901 500302902 500302903 500302904]
 [501302900 501302901 501302902 501302903 501302904]]
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 45 ms
```
These times are very close to the round-trip latency to the cat2.cloud machine:
```
> ping cat2.cloud (blosc2)
PING cat2.cloud (178.63.45.221): 56 data bytes
64 bytes from 178.63.45.221: icmp_seq=0 ttl=55 time=44.826 ms
64 bytes from 178.63.45.221: icmp_seq=1 ttl=55 time=48.003 ms
64 bytes from 178.63.45.221: icmp_seq=2 ttl=55 time=48.419 ms
64 bytes from 178.63.45.221: icmp_seq=3 ttl=55 time=47.105 ms
64 bytes from 178.63.45.221: icmp_seq=4 ttl=55 time=48.079 ms
64 bytes from 178.63.45.221: icmp_seq=5 ttl=55 time=48.555 ms
64 bytes from 178.63.45.221: icmp_seq=6 ttl=55 time=46.870 ms
64 bytes from 178.63.45.221: icmp_seq=7 ttl=55 time=48.645 ms
64 bytes from 178.63.45.221: icmp_seq=8 ttl=55 time=45.121 ms
64 bytes from 178.63.45.221: icmp_seq=9 ttl=55 time=44.740 ms
64 bytes from 178.63.45.221: icmp_seq=10 ttl=55 time=44.740 ms
^C
--- cat2.cloud ping statistics ---
11 packets transmitted, 11 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 44.740/47.097/48.645/1.449 ms
```
I suppose what is happening is that the embedded notebook runs through the browser, which uses HTTP/2 by default. This shows that we can improve latency quite a lot if we can make our clients use HTTP/2 too.
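Before switching clients over, it can be useful to confirm which protocol the server actually negotiates. A sketch using only the standard library to inspect the ALPN handshake (the hostname is the one from the ping above; the check itself needs network access, so it is left in the usage comment):

```python
import socket
import ssl

def negotiated_protocol(host, port=443, timeout=5.0):
    """Return the ALPN protocol the server selects ("h2" means HTTP/2)."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2 first
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# Usage (requires network): negotiated_protocol("cat2.cloud")
# should return "h2" if the server has HTTP/2 enabled.
```

If the server only ever returns "http/1.1", the fix has to start on the server side (or its reverse proxy) before any client-side change can help.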