In-Depth: A Source-Code Analysis of Parallel Degree in PostgreSQL 9.6
Parallel query in PostgreSQL 9.6 has just been released, and I expect many of you have already started testing it. Last night I tested a bit-operation scenario typical of tagging systems and found it ran about 7x faster than the non-parallel case.
At that point I had not studied the code carefully, and no matter how I tested, only 8 workers were ever used. Today I read the code and found the reason. The parallel degree is determined by several parameters:
1. The maximum allowed parallel degree:
max_parallel_degree
2. The table-level parallel degree (set via CREATE TABLE or ALTER TABLE):
parallel_degree
If the table's parallel_degree is set, the effective degree is min(max_parallel_degree, parallel_degree):
/*
* Use the table parallel_degree, but don't go further than
* max_parallel_degree.
*/
parallel_degree = Min(rel->rel_parallel_degree, max_parallel_degree);
3. If the table has no parallel_degree set, the degree is estimated from the table size and the hard-coded parallel_threshold value (see the function create_plain_partial_paths).
The result is still capped by max_parallel_degree and cannot exceed it.
The code is as follows:
src/backend/optimizer/util/plancat.c

void
get_relation_info(PlannerInfo *root, Oid relationObjectId, bool inhparent,
                  RelOptInfo *rel)
{
    ...
    /* Retrieve the parallel_degree reloption, if set. */
    rel->rel_parallel_degree = RelationGetParallelDegree(relation, -1);
    ...
}

src/include/utils/rel.h

/*
 * RelationGetParallelDegree
 *      Returns the relation's parallel_degree.  Note multiple eval of argument!
 */
#define RelationGetParallelDegree(relation, defaultpd) \
    ((relation)->rd_options ? \
     ((StdRdOptions *) (relation)->rd_options)->parallel_degree : (defaultpd))

src/backend/optimizer/path/allpaths.c

/*
 * create_plain_partial_paths
 *      Build partial access paths for parallel scan of a plain relation
 */
static void
create_plain_partial_paths(PlannerInfo *root, RelOptInfo *rel)
{
    int         parallel_degree = 1;

    /*
     * If the user has set the parallel_degree reloption, we decide what to do
     * based on the value of that option.  Otherwise, we estimate a value.
     */
    if (rel->rel_parallel_degree != -1)
    {
        /*
         * If parallel_degree = 0 is set for this relation, bail out.  The
         * user does not want a parallel path for this relation.
         */
        if (rel->rel_parallel_degree == 0)
            return;

        /*
         * Use the table parallel_degree, but don't go further than
         * max_parallel_degree.
         */
        parallel_degree = Min(rel->rel_parallel_degree, max_parallel_degree);
    }
    else
    {
        int         parallel_threshold = 1000;

        /*
         * If this relation is too small to be worth a parallel scan, just
         * return without doing anything ... unless it's an inheritance child.
         * In that case, we want to generate a parallel path here anyway.  It
         * might not be worthwhile just for this relation, but when combined
         * with all of its inheritance siblings it may well pay off.
         */
        if (rel->pages < parallel_threshold &&
            rel->reloptkind == RELOPT_BASEREL)
            return;

        /*
         * When no table-level parallel_degree is set, compute the degree
         * from the table size and parallel_threshold.
         *
         * Limit the degree of parallelism logarithmically based on the size
         * of the relation.  This probably needs to be a good deal more
         * sophisticated, but we need something here for now.
         */
        while (rel->pages > parallel_threshold * 3 &&
               parallel_degree < max_parallel_degree)
        {
            parallel_degree++;
            parallel_threshold *= 3;
            if (parallel_threshold >= PG_INT32_MAX / 3)
                break;
        }
    }

    /* Add an unordered partial path based on a parallel sequential scan. */
    add_partial_path(rel, create_seqscan_path(root, rel, NULL, parallel_degree));
}
Test:
Raise the parallel degree to 32, since this machine has 32 cores.
postgres=# alter table t_bit2 set (parallel_degree =32);
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*) from t_bit2 where bitand(id, '10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010')=B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize Aggregate (cost=1551053.25..1551053.26 rows=1 width=8) (actual time=31092.551..31092.552 rows=1 loops=1)
Output: count(*)
Buffers: shared hit=1473213
-> Gather (cost=1551049.96..1551053.17 rows=32 width=8) (actual time=31060.939..31092.469 rows=33 loops=1)
Output: (PARTIAL count(*))
Workers Planned: 32
Workers Launched: 32
Buffers: shared hit=1473213
-> Partial Aggregate (cost=1550049.96..1550049.97 rows=1 width=8) (actual time=31047.074..31047.075 rows=1 loops=33)
Output: PARTIAL count(*)
Buffers: shared hit=1470589
Worker 0: actual time=31037.287..31037.288 rows=1 loops=1
Buffers: shared hit=43483
Worker 1: actual time=31035.803..31035.804 rows=1 loops=1
Buffers: shared hit=45112
Worker 2: actual time=31036.950..31036.951 rows=1 loops=1
Buffers: shared hit=43238
Worker 3: actual time=31063.823..31063.823 rows=1 loops=1
Buffers: shared hit=43931
Worker 4: actual time=31035.934..31035.935 rows=1 loops=1
Buffers: shared hit=42676
Worker 5: actual time=31035.334..31035.335 rows=1 loops=1
Buffers: shared hit=45662
Worker 6: actual time=31038.237..31038.238 rows=1 loops=1
Buffers: shared hit=44882
Worker 7: actual time=31043.767..31043.767 rows=1 loops=1
Buffers: shared hit=47740
Worker 8: actual time=31038.297..31038.297 rows=1 loops=1
Buffers: shared hit=47779
Worker 9: actual time=31056.614..31056.614 rows=1 loops=1
Buffers: shared hit=43574
Worker 10: actual time=31040.406..31040.406 rows=1 loops=1
Buffers: shared hit=48292
Worker 11: actual time=31041.379..31041.384 rows=1 loops=1
Buffers: shared hit=48654
Worker 12: actual time=31019.450..31019.451 rows=1 loops=1
Buffers: shared hit=44657
Worker 13: actual time=31040.666..31040.668 rows=1 loops=1
Buffers: shared hit=42903
Worker 14: actual time=31029.439..31029.440 rows=1 loops=1
Buffers: shared hit=51098
Worker 15: actual time=31032.364..31032.364 rows=1 loops=1
Buffers: shared hit=48112
Worker 16: actual time=31043.330..31043.330 rows=1 loops=1
Buffers: shared hit=40712
Worker 17: actual time=31052.240..31052.241 rows=1 loops=1
Buffers: shared hit=43938
Worker 18: actual time=31052.810..31052.811 rows=1 loops=1
Buffers: shared hit=46617
Worker 19: actual time=31052.894..31052.894 rows=1 loops=1
Buffers: shared hit=40536
Worker 20: actual time=31053.521..31053.521 rows=1 loops=1
Buffers: shared hit=43820
Worker 21: actual time=31054.699..31054.699 rows=1 loops=1
Buffers: shared hit=43356
Worker 22: actual time=31055.046..31055.047 rows=1 loops=1
Buffers: shared hit=44030
Worker 23: actual time=31055.070..31055.073 rows=1 loops=1
Buffers: shared hit=40500
Worker 24: actual time=31055.108..31055.108 rows=1 loops=1
Buffers: shared hit=42840
Worker 25: actual time=31054.733..31054.735 rows=1 loops=1
Buffers: shared hit=40342
Worker 26: actual time=31055.962..31055.963 rows=1 loops=1
Buffers: shared hit=44344
Worker 27: actual time=31056.279..31056.280 rows=1 loops=1
Buffers: shared hit=47810
Worker 28: actual time=31056.324..31056.325 rows=1 loops=1
Buffers: shared hit=44747
Worker 29: actual time=31056.259..31056.259 rows=1 loops=1
Buffers: shared hit=43673
Worker 30: actual time=31057.195..31057.195 rows=1 loops=1
Buffers: shared hit=40444
Worker 31: actual time=31055.871..31055.876 rows=1 loops=1
Buffers: shared hit=46439
-> Parallel Seq Scan on public.t_bit2 (cost=0.00..1549983.80 rows=26465 width=0) (actual time=0.040..17244.827 rows=6060606 loops=33)
Output: id
Filter: (bitand(t_bit2.id, B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010'::"bit") = B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010'::"bit")
Buffers: shared hit=1470589
Worker 0: actual time=0.035..17314.296 rows=5913688 loops=1
Buffers: shared hit=43483
Worker 1: actual time=0.030..16965.158 rows=6135232 loops=1
Buffers: shared hit=45112
Worker 2: actual time=0.040..17097.631 rows=5880368 loops=1
Buffers: shared hit=43238
Worker 3: actual time=0.026..17121.847 rows=5974616 loops=1
Buffers: shared hit=43931
Worker 4: actual time=0.057..17024.045 rows=5803936 loops=1
Buffers: shared hit=42676
Worker 5: actual time=0.048..17249.414 rows=6210032 loops=1
Buffers: shared hit=45662
Worker 6: actual time=0.039..17190.435 rows=6103952 loops=1
Buffers: shared hit=44882
Worker 7: actual time=0.043..17203.755 rows=6492640 loops=1
Buffers: shared hit=47740
Worker 8: actual time=0.046..17204.168 rows=6497944 loops=1
Buffers: shared hit=47779
Worker 9: actual time=0.026..17088.716 rows=5926064 loops=1
Buffers: shared hit=43574
Worker 10: actual time=0.041..17114.139 rows=6567712 loops=1
Buffers: shared hit=48292
Worker 11: actual time=0.038..17237.905 rows=6616944 loops=1
Buffers: shared hit=48654
Worker 12: actual time=0.138..17259.257 rows=6073352 loops=1
Buffers: shared hit=44657
Worker 13: actual time=0.060..17204.828 rows=5834808 loops=1
Buffers: shared hit=42903
Worker 14: actual time=0.041..17168.707 rows=6949328 loops=1
Buffers: shared hit=51098
Worker 15: actual time=0.034..17294.266 rows=6543232 loops=1
Buffers: shared hit=48112
Worker 16: actual time=0.037..17166.335 rows=5536832 loops=1
Buffers: shared hit=40712
Worker 17: actual time=0.033..17224.710 rows=5975568 loops=1
Buffers: shared hit=43938
Worker 18: actual time=0.027..17218.971 rows=6339912 loops=1
Buffers: shared hit=46617
Worker 19: actual time=0.034..17227.116 rows=5512896 loops=1
Buffers: shared hit=40536
Worker 20: actual time=0.033..17169.460 rows=5959520 loops=1
Buffers: shared hit=43820
Worker 21: actual time=0.033..17176.166 rows=5896416 loops=1
Buffers: shared hit=43356
Worker 22: actual time=0.024..17273.591 rows=5988080 loops=1
Buffers: shared hit=44030
Worker 23: actual time=0.026..17370.737 rows=5508000 loops=1
Buffers: shared hit=40500
Worker 24: actual time=0.029..17087.689 rows=5826240 loops=1
Buffers: shared hit=42840
Worker 25: actual time=0.060..17293.165 rows=5486512 loops=1
Buffers: shared hit=40342
Worker 26: actual time=0.024..17339.611 rows=6030680 loops=1
Buffers: shared hit=44344
Worker 27: actual time=0.035..17416.782 rows=6502160 loops=1
Buffers: shared hit=47810
Worker 28: actual time=0.033..17478.751 rows=6085592 loops=1
Buffers: shared hit=44747
Worker 29: actual time=0.037..17318.121 rows=5939528 loops=1
Buffers: shared hit=43673
Worker 30: actual time=0.058..17525.592 rows=5500384 loops=1
Buffers: shared hit=40444
Worker 31: actual time=0.031..17580.908 rows=6315704 loops=1
Buffers: shared hit=46439
Planning time: 0.354 ms
Execution time: 31157.006 ms
(145 rows)
The bit-operation query (parallel settings above still in effect):
postgres=# select count(*) from t_bit2 where bitand(id, '10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010')=B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010';
count
-----------
 200000000
(1 row)
Time: 4320.931 ms
Plain count(*):
postgres=# select count(*) from t_bit2;
count
-----------
 200000000
(1 row)
Time: 1896.647 ms
Query performance with parallelism disabled:
postgres=# set force_parallel_mode =off;
SET
postgres=# alter table t_bit2 set (parallel_degree =0);
ALTER TABLE
postgres=# \timing
Timing is on.
postgres=# select count(*) from t_bit2 where bitand(id, '10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010')=B'10101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010';
count
-----------
 200000000
(1 row)
Time: 53098.480 ms
postgres=# select count(*) from t_bit2;
count
-----------
 200000000
(1 row)
Time: 18504.679 ms
Table size:
postgres=# \dt+ t_bit2
List of relations
Schema | Name | Type | Owner | Size | Description
--------+--------+-------+----------+-------+-------------
public | t_bit2 | table | postgres | 11 GB |
(1 row)
References:
http://www.postgresql.org/docs/9.6/static/sql-createtable.html
parallel_degree (integer)
The parallel degree for a table is the number of workers that should be used to assist a parallel scan of that table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner may be less, for example due to the setting of max_parallel_degree.
http://www.postgresql.org/docs/9.6/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER
force_parallel_mode (enum)
Allows the use of parallel queries for testing purposes even in cases where no performance benefit is expected. The allowed values of force_parallel_mode are off (use parallel mode only when it is expected to improve performance), on (force parallel query for all queries for which it is thought to be safe), and regress (like on, but with additional behavior changes as explained below).
More specifically, setting this value to on will add a Gather node to the top of any query plan for which this appears to be safe, so that the query runs inside of a parallel worker. Even when a parallel worker is not available or cannot be used, operations such as starting a subtransaction that would be prohibited in a parallel query context will be prohibited unless the planner believes that this will cause the query to fail. If failures or unexpected results occur when this option is set, some functions used by the query may need to be marked PARALLEL UNSAFE (or, possibly, PARALLEL RESTRICTED).
Setting this value to regress has all of the same effects as setting it to on plus some additional effects that are intended to facilitate automated regression testing. Normally, messages from a parallel worker include a context line indicating that, but a setting of regress suppresses this line so that the output is the same as in non-parallel execution. Also, the Gather nodes added to plans by this setting are hidden in EXPLAIN output so that the output matches what would be obtained if this setting were turned off.
http://www.postgresql.org/docs/9.6/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-ASYNC-BEHAVIOR
max_parallel_degree (integer)
Sets the maximum number of workers that can be started for an individual parallel operation. Parallel workers are taken from the pool of processes established by max_worker_processes. Note that the requested number of workers may not actually be available at runtime. If this occurs, the plan will run with fewer workers than expected, which may be inefficient. The default value is 2. Setting this value to 0 disables parallel query execution.
http://www.postgresql.org/docs/9.6/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS
parallel_setup_cost (floating point)
Sets the planner's estimate of the cost of launching parallel worker processes. The default is 1000.
parallel_tuple_cost (floating point)
Sets the planner's estimate of the cost of transferring one tuple from a parallel worker process to another process. The default is 0.1.