PostgreSQL 11

A plain select count(*) does run in parallel:
sdktest=> explain(analyze,verbose,costs) select count(*) from t_l_game_login_log;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=949099.50..949099.51 rows=1 width=8) (actual time=1926.216..1926.217 rows=1 loops=1)
   Output: count(*)
   ->  Gather  (cost=949099.46..949099.47 rows=12 width=8) (actual time=1923.655..1967.013 rows=13 loops=1)
         Output: (PARTIAL count(*))
         Workers Planned: 12
         Workers Launched: 12
         ->  Partial Aggregate  (cost=948099.46..948099.47 rows=1 width=8) (actual time=1856.612..1856.613 rows=1 loops=13)
               Output: PARTIAL count(*)
               Worker 0: actual time=1849.929..1849.929 rows=1 loops=1
               Worker 1: actual time=1855.286..1855.287 rows=1 loops=1
               Worker 2: actual time=1848.387..1848.387 rows=1 loops=1
               Worker 3: actual time=1849.642..1849.642 rows=1 loops=1
               Worker 4: actual time=1849.865..1849.865 rows=1 loops=1
               Worker 5: actual time=1849.815..1849.816 rows=1 loops=1
               Worker 6: actual time=1859.493..1859.493 rows=1 loops=1
               Worker 7: actual time=1855.420..1855.420 rows=1 loops=1
               Worker 8: actual time=1849.899..1849.900 rows=1 loops=1
               Worker 9: actual time=1849.641..1849.641 rows=1 loops=1
               Worker 10: actual time=1849.872..1849.872 rows=1 loops=1
               Worker 11: actual time=1849.886..1849.887 rows=1 loops=1
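(For context: the bare count(*) parallelizes because PG 11 can put a partial aggregate under the Gather node. The parallelism-related settings in effect can be inspected with standard GUCs; just a sketch, actual values depend on your config:)

```sql
SHOW max_parallel_workers_per_gather;
SHOW max_parallel_workers;
SHOW min_parallel_table_scan_size;
```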
But as soon as it's more than a bare count, parallelism disappears...
sdktest=> explain(analyze,verbose,costs) select time_bucket('1 days', count_time) as day1, COUNT(id) from t_l_game_login_log where count_time > now() - interval '60 day' group by day1, app_id;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=807049.04..807172.99 rows=9916 width=20) (actual time=12512.462..12513.110 rows=2755 loops=1)
   Output: (time_bucket('1 day'::interval, t_l_game_login_log.count_time)), count(t_l_game_login_log.id), t_l_game_login_log.app_id
   Group Key: time_bucket('1 day'::interval, t_l_game_login_log.count_time), t_l_game_login_log.app_id
   ->  Custom Scan (ConstraintAwareAppend)  (cost=0.00..712676.29 rows=12583034 width=16) (actual time=0.065..10334.102 rows=12650070 loops=1)
         Output: time_bucket('1 day'::interval, t_l_game_login_log.count_time), t_l_game_login_log.app_id, t_l_game_login_log.id
         Hypertable: t_l_game_login_log
         Chunks left after exclusion: 36
         ->  Append  (cost=0.00..681218.70 rows=12583034 width=16) (actual time=0.063..8630.928 rows=12650070 loops=1)
               ->  Index Scan using _hyper_32_1429_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1429_chunk  (cost=0.43..14150.17 rows=259580 width=16) (actual time=0.062..154.435 rows=277763 loops=1)
                     Output: _hyper_32_1429_chunk.count_time, _hyper_32_1429_chunk.id, _hyper_32_1429_chunk.app_id
                     Index Cond: (_hyper_32_1429_chunk.count_time > (now() - '60 days'::interval))
               ->  Index Scan using _hyper_32_1430_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1430_chunk  (cost=0.43..5594.26 rows=106791 width=16) (actual time=0.058..56.189 rows=106399 loops=1)
                     Output: _hyper_32_1430_chunk.count_time, _hyper_32_1430_chunk.id, _hyper_32_1430_chunk.app_id
                     Index Cond: (_hyper_32_1430_chunk.count_time > (now() - '60 days'::interval))
               ->  Index Scan using _hyper_32_1431_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1431_chunk  (cost=0.43..11419.14 rows=213147 width=16) (actual time=0.067..141.589 rows=257787 loops=1)
                     Output: _hyper_32_1431_chunk.count_time, _hyper_32_1431_chunk.id, _hyper_32_1431_chunk.app_id
                     Index Cond: (_hyper_32_1431_chunk.count_time > (now() - '60 days'::interval))
               ->  Index Scan using _hyper_32_1432_chunk_t_l_game_login_log_count_time_idx on _timescaledb_internal._hyper_32_1432_chunk  (cost=0.42..2065.82 rows=39474 width=16) (actual time=0.064..26.715 rows=44729 loops=1)
                     Output: _hyper_32_1432_chunk.count_time, _hyper_32_1432_chunk.id, _hyper_32_1432_chunk.app_id
                     Index Cond: (_hyper_32_1432_chunk.count_time > (now() - '60 days'::interval))
               ->  Seq Scan on _timescaledb_internal._hyper_32_1445_chunk  (cost=0.00..27784.83 rows=572733 width=16) (actual time=0.015..331.596 rows=572733 loops=1)
                     Output: _hyper_32_1445_chunk.count_time, _hyper_32_1445_chunk.id, _hyper_32_1445_chunk.app_id
                     Filter: (_hyper_32_1445_chunk.count_time > (now() - '60 days'::interval))
               ->  Seq Scan on _timescaledb_internal._hyper_32_1446_chunk  (cost=0.00..31036.78 rows=627016 width=16) (actual time=0.016..364.134 rows=626923 loops=1)
                     Output: _hyper_32_1446_chunk.count_time, _hyper_32_1446_chunk.id, _hyper_32_1446_chunk.app_id
                     Filter: (_hyper_32_1446_chunk.count_time > (now() - '60 days'::interval))
               ->  Seq Scan on _timescaledb_internal._hyper_32_1447_chunk  (cost=0.00..12388.78 rows=262216 width=16) (actual time=0.016..149.345 rows=262216 loops=1)
                     Output: _hyper_32_1447_chunk.count_time, _hyper_32_1447_chunk.id, _hyper_32_1447_chunk.app_id
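(One thing I'm not sure about: PostgreSQL will never produce a parallel plan if the query calls any function marked PARALLEL UNSAFE. Whether time_bucket is marked parallel safe can be checked against the system catalog, roughly like this:)

```sql
-- proparallel: 's' = safe, 'r' = restricted, 'u' = unsafe
SELECT proname, proparallel
FROM pg_proc
WHERE proname = 'time_bucket';
```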
Forcing parallel query doesn't help either.
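(By "forcing" I mean the usual knobs, roughly this; these are standard PG 11 settings:)

```sql
SET max_parallel_workers_per_gather = 12;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET force_parallel_mode = on;  -- PG 11 name; renamed debug_parallel_query in later versions
```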
Is this something I'm doing wrong, or does TimescaleDB just not support parallel queries?
Couldn't find anything on Google either, orz.