Millisecond Marketing with Trillions of User Tags Using PostgreSQL

Summary: By taking advantage of two PostgreSQL functions, we can solve one of the biggest problems plaguing the industry – building an accurate recommendation system.


Background

By taking advantage of just two small PostgreSQL functions, we are able to solve one of the biggest long-term problems plaguing the industry.

The recommendation system is the backbone of any advertising platform. A proper recommendation system must be precise, real-time, and highly efficient.

But which advertisement platform is the best one, and which has the strongest core? An effective recommendation system needs three fundamental capabilities: precise tagging, real-time tag updates, and efficient tag queries.

Precision refers to the ability to draw distinct, precise user portraits based on massive amounts of user behavior data. This profiling system is also called a tag system. Effective recommendation often hinges on accurate tagging. For example, you likely would not market a pair of reading glasses to the average young man, unless his purchase intention tags demonstrate that he might wish to purchase a pair of reading glasses.

Tags must be updated in real time. Many tags are extremely time sensitive, for example marketing to certain customers may only be effective within a certain time window. Likewise, a certain product may be appropriate one minute and less so the next. If your tags are generated every other day or longer, you may miss the best recommendation opportunity. Therefore, it is critical for a recommendation system to operate in real time.

Efficiency refers to the system's ability to concurrently react and adapt to user behavior. After all, those of us in the advertisement industry need to obtain information as quickly as possible. If you have several clients purchasing advertisements, your system's capacity for concurrency will be tested before long.

An advertisement platform may only be competitive if it possesses the above three core capabilities. Of course, cost is also a concern, particularly hardware, development, and maintenance costs.

We will use an E-commerce recommendation system as an example to demonstrate database design and optimization techniques for recommendation systems.

Requirements of E-commerce Recommendation Systems

For example, how does a store discover its target customers?

To answer this question, we first need to collect some data. For example:

1.Users who browse and make purchases at this and other similar stores. Browsing online stores generates behavior records, for instance the stores and commodities we browse, the products we purchase, the stores where we make those purchases, and the times at which all of this happens. Then, for each store, a sample of users who browsed and purchased there can be extracted.

2.After obtaining these user groups, you can filter them for similar purchase intention or other attributes. You can analyze the attributes of these groups to form a larger target group and market to them.

These are just two simple deductions that can be made concerning E-commerce recommendation systems.

Reach and Magnitude

The number of e-shoppers may reach several billion worldwide, and stores could reach the order of tens of millions.

The number of available products (divided into subcategories like bar codes) might even reach several hundred million.

Store tags for a single user may include the stores the user visited, the time of visit, products viewed and number of times they were viewed, and finally the products purchased by the user. Hundreds of such tags can be generated for a person over time.

When the number of user tags reaches several hundred thousand, we get a clearer picture of the user and his or her habits.

Estimates show that we could see trillions of user_tags in the near future (several billion users, each with hundreds of tags related to stores/products).

Efficient Database Design

First, we need to organize key data.

Data include user ID, browsed stores and times, browsed products and times, and purchased products and quantity (times/quantity can be set to represent a range. For example, 0 can represent any quantity less than 100, 1 can be a quantity between 100 and 500, etc.).

These data points can be easily generated from user behavior data, and in turn combined to obtain a target customer group for store or product specific advertisement. For example, we may use this data to define a group of users who browsed haircare related products ten times in one week.

Designing table structure

1.Generate IDs for stores and products.

2.Track browsing times and quantity of purchased products.

Each user may browse and purchase multiple products over a certain period of time. If a record is generated for each store and product, we will wind up with a massive number of records. This can be a waste of space and negatively impact query efficiency.

PostgreSQL arrays make these tasks possible while saving storage space. PostgreSQL also supports indexes on arrays (such as GIN indexes), which improves query efficiency.

3.The table structure is as follows:

3.1Range table, including dozens or even hundreds of records

Field and description

class int,    -- Dimension, corresponding to s1, s2, s3, s4 in a user tag table
id int,       -- Offset amount (or enumeration value)    
description text -- Description (such as 1-10,000, 10,001-100,000, etc.)    
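A minimal DDL sketch of this dictionary table, assuming a table name (range_desc) not given in the text:

create table range_desc (
  class int,          -- dimension, corresponding to s1, s2, s3, s4 in the user tag table
  id int,             -- offset (enumeration value) within the dimension
  description text,   -- human-readable range, e.g. '1-10,000', '10,001-100,000'
  primary key (class, id)
);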

3.2User tag table

uid int primary key,  -- User ID  
s1 int[],  -- Browsed stores and time ranges (store ID hash + range table ID)     
s2 int[],  -- Browsed products and time ranges (product ID hash + range table ID)  
s3 int[],  -- Purchased products and quantity ranges (product ID hash + range table ID)
s4 int[],   -- ... Other dimensions and the like 
Time interval 1, ... -- One such table is kept per time interval (e.g. 1 day): aggregate each day's counts and write them into that day's table
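A matching sketch of the user tag table for one time interval; the names are assumptions, and one such table (e.g. user_tags_1d for the daily interval) would be created per interval:

create table user_tags_1d (
  uid int primary key,   -- user ID
  s1 int[],              -- browsed stores + time-range step (encoded)
  s2 int[],              -- browsed products + time-range step (encoded)
  s3 int[],              -- purchased products + quantity step (encoded)
  s4 int[]               -- other dimensions
);

-- GIN indexes on the arrays support fast @> (contains) queries
create index idx_user_tags_1d_s1 on user_tags_1d using gin (s1);
create index idx_user_tags_1d_s2 on user_tags_1d using gin (s2);
create index idx_user_tags_1d_s3 on user_tags_1d using gin (s3);
create index idx_user_tags_1d_s4 on user_tags_1d using gin (s4);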

3.3Step-wise times

Browsing times and purchase quantity are continuous values. It is suggested that they be bucketed into steps for better mining.

According to the design in item 3.1, 1-10,000 constitutes a step, and 10,001-100,000 constitutes the next step.

Example

Steps for the s1 dimension (store traffic)
1 -> 0
2 -> 1-10
3 -> 11-100
4 -> 101-500
5 -> 501-
...
9 -> 10000+

3.4Combine product and store IDs and the step into a new value - method 1

The value is stored in text[], for example store ID:OFFSET. A text array (text[]) is less efficient than an integer array (INT[]), and it requires more storage space.

However, if you can handle the downsides, text[] does require slightly less development effort.

Example

userid | s1               | s2               | s3
1      | {'1:2', '109:9'} | {'2:2', '88:19'} | {'2:1', '88:2'}
Meaning:
• User ID: 1,
• Browsed store 1 (step=2) and store 109 (step 9),
• Browsed product 2 (step=2) and product 88 (step 19),
• Purchased product 2 (step=1) and product 88 (step 2).
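With this layout, target customer discovery becomes an array containment query. A hedged sketch, assuming a variant of the tag table with text[] columns (here called user_tags_text, a name not given in the text):

select uid
from user_tags_text
where s1 @> array['1:2']     -- users who browsed store 1 at step 2
  and s3 @> array['88:2'];   -- and purchased product 88 at step 2

A GIN index per column (create index ... using gin (s1), etc.) keeps these containment queries fast.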

3.5Combine product and store IDs and the step into a new value - method 2

The first method uses a text array, while the second method uses a more efficient int/int8 array.

A formula must be used to make a single int/int8 value express two meanings (the original store/product ID and the step).

The formula design (formula and constant) is as follows:

Take the number of visits to the store (s1) field as an example:

Starting ID of new value = new_start_val = 1                  -- This constant can be freely defined but cannot be changed once it has been set.
Corresponding step (such as the number of steps for the traffic to the store) = step = 9   -- This constant can be freely defined (the number of steps of each dimension can be different), and cannot be changed once it is set.
Store ID = dp_id                                  -- The original store ID
int/int8 new value = new_val                          -- The new generated value with two meanings (product ID and number of steps)

If the store ID is known, calculate new_val (a write and query process):

new_val = new_start_val + (step+1)*(dp_id-1)

If new_val is known, calculate the store ID (a translation process):

dp_id = 1 + (new_val-new_start_val)/(step+1)

Example (step is 19, and new_start_val is 1)

Browsed store ID=1, indicating 1 step
Browsed store ID=192, indicating 15 steps

Use the information above, the constant, and the formula to generate the new_val for the array:  

{1, 3821}

Translate the store ID based on the array above, constant, and formula:  

{1, 192}
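As a minimal sketch, the two formulas can be wrapped in SQL functions using the constants from the example (new_start_val = 1, step = 19); the (step+1)-wide band reserved per store ID is presumably what leaves room to pack the step into the same integer. The function names are assumptions:

-- encode: store ID -> new_val (write/query path); constants must stay fixed once chosen
create or replace function dpid_to_newval(dp_id int) returns int as $$
  select 1 + (19 + 1) * (dp_id - 1);     -- new_start_val = 1, step = 19
$$ language sql immutable;

-- decode: new_val -> store ID (translation path)
create or replace function newval_to_dpid(new_val int) returns int as $$
  select 1 + (new_val - 1) / (19 + 1);   -- integer division
$$ language sql immutable;

-- dpid_to_newval(1)    -> 1
-- dpid_to_newval(192)  -> 3821   (matches the array {1, 3821})
-- newval_to_dpid(3821) -> 192    (matches the array {1, 192})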

4.Shard

For example, it is suggested that each 5 million or 10 million users be organized into a partition. Query efficiency can be improved using parallel query.

It is recommended to use parallel query (with plproxy, one connection per partition carries out the query in parallel) to quickly retrieve all users.

If you want to obtain the first users quickly and need streamed results, use inheritance (across multiple nodes, postgres_fdw + pg_pathman or postgres_fdw + inheritance) and return the rows through a cursor.

Performance indicators

Browsed stores and products, purchased products, and the billions of users tracked over the specified time interval are all aggregated into a tag array to generate over a trillion user tags.

What kind of performance can we expect when discovering target groups among billions of users based on tags? Since we are using indexes, we can keep the response time to around 10 ms as long as we also use stream return. Is it starting to feel like the analytics industry is going to be measured in milliseconds?

If you're not familiar with PostgreSQL, you may find these results surprising, but you'll quickly get used to them as you work with PostgreSQL more often. It provides a whole host of functions to help you solve a wide variety of problems.

Real-time Design

We've already explained how to efficiently obtain user data; now let's look at how we can update user tags in real time.

Stream processing

The goal of stream processing is to update user tags in real time. For example, if a user generates hundreds of browsing records every second, these records need to be merged into the appropriate tags.

With a hundred million or more active users, these update streams need to process up to a trillion updates a day, so how can we update the database in real time? A lot of users probably use the T+1 method and simply give up on real-time responsiveness.

However, in reality it isn't an unreasonable thing to ask. For example, we can use stream processing in PostgreSQL to handle extremely high-traffic update streams.

You may be wondering whether the database is capable of processing such a stream. How is the data updated in real time in the database?

An open-source product called PipelineDB (based on and compatible with PostgreSQL) is designed to do just this. It merges data in real time: updates accumulate in memory first, and once a threshold is reached (a time interval you set or a number of accumulated updated rows), the merged result is persisted.
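Even without PipelineDB, the core idea of merging a high-rate event stream into the tag arrays inside the database can be sketched with a plain PostgreSQL upsert; the table and column names follow the earlier sketch and are assumptions. PipelineDB essentially automates this accumulate-and-persist cycle:

-- merge a freshly computed batch of tags for user 1 into the tag table
insert into user_tags_1d (uid, s1)
values (1, array[3821])
on conflict (uid) do update
set s1 = (
  select array_agg(distinct x)                        -- append and de-duplicate
  from unnest(user_tags_1d.s1 || excluded.s1) t(x)
);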

There are two articles available for reference

Application of Stream Processing for the "Internet of Things (IoT)" - Use PostgreSQL for Real-time Processing (Trillions Each Day)

Ever-present StreamCompute - Combining PostgreSQL and PipelineDB for IoT

Of course, if it isn't necessary to process data in real time, and T+1 meets your requirements, then it's not necessary to use PipelineDB.

Why does stream processing need to be completed in the database?

We know that target customer discovery can only happen once the tag data is in the database. If, instead of in-database stream computing, we use an external framework such as JStorm, an extra merge layer sits outside the database and only the merged results are written back; for example, 100 billion updates could become only 100 million.

However, external stream processing does bring with it a number of challenges.

1.JStorm requires additional computing resources and offers no better parallel efficiency than PipelineDB.

2.Furthermore, it does not provide faster queries than in-database StreamCompute, which inserts the merged data directly into the database.

3.It also increases development costs.

Stress Testing

For the stress testing phase I chose a machine with 32 cores, 2 SSD cards, and 512GB memory.

It stores 320 million users. Each user has 4 array fields, and each field contains 1,000 elements, i.e., 4,000 tags per user * 320 million users = 1.28 trillion user_tags.

Example 1

There are 10 tables, each with 10 million users, and 4 tag fields stored with tsvector.

Use a rum index.

postgres=# create tablespace tbs1 location '/u01/digoal/tbs1';  
CREATE TABLESPACE  
  
postgres=# create tablespace tbs2 location '/u02/digoal/tbs2';  
CREATE TABLESPACE  
  
do language plpgsql 
$$
  
declare  
  i int;  
  suffix text;  
  tbs text;  
begin  
  for i in 0..10 loop  
    if i=0 then  
      suffix := '';  
      tbs := 'tbs1';  
    elsif i >=1 and i<=5 then  
      suffix := i::text;  
      tbs := 'tbs1';  
    else  
      suffix := i::text;  
      tbs := 'tbs2';  
    end if;  
    if i=0 then  
      execute 'create unlogged table test'||suffix||'(uid int primary key USING INDEX TABLESPACE '||tbs||', s1 tsvector, s2 tsvector, s3 tsvector, s4 tsvector) with (autovacuum_enabled=off, toast.autovacuum_enabled=off) tablespace '||tbs;  
    else  
      execute 'create unlogged table test'||suffix||'(uid int primary key USING INDEX TABLESPACE '||tbs||', s1 tsvector, s2 tsvector, s3 tsvector, s4 tsvector) inherits(test) with (autovacuum_enabled=off, toast.autovacuum_enabled=off) tablespace '||tbs;  
    end if;  
    execute 'create index idx_test'||suffix||'_s1 on test'||suffix||' using rum(s1 rum_tsvector_ops) tablespace '||tbs;  
    execute 'create index idx_test'||suffix||'_s2 on test'||suffix||' using rum(s2 rum_tsvector_ops) tablespace '||tbs;  
    execute 'create index idx_test'||suffix||'_s3 on test'||suffix||' using rum(s3 rum_tsvector_ops) tablespace '||tbs;   
    execute 'create index idx_test'||suffix||'_s4 on test'||suffix||' using rum(s4 rum_tsvector_ops) tablespace '||tbs;  
  end loop;  
end;  

$$
;  
  
select relname,reltablespace from pg_class  where relname ~ 'test' order by 2,1;  

Generate the test data script

vi test.sql  
  
\set uid1 random(1,10000000)  
\set uid2 random(10000001,20000000)  
\set uid3 random(20000001,30000000)  
\set uid4 random(30000001,40000000)  
\set uid5 random(40000001,50000000)  
\set uid6 random(50000001,60000000)  
\set uid7 random(60000001,70000000)  
\set uid8 random(70000001,80000000)  
\set uid9 random(80000001,90000000)  
\set uid10 random(90000001,100000000)  
insert into test1 (uid,s1,s2,s3,s4) select :uid1+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test2 (uid,s1,s2,s3,s4) select :uid2+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test3 (uid,s1,s2,s3,s4) select :uid3+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test4 (uid,s1,s2,s3,s4) select :uid4+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test5 (uid,s1,s2,s3,s4) select :uid5+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test6 (uid,s1,s2,s3,s4) select :uid6+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test7 (uid,s1,s2,s3,s4) select :uid7+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test8 (uid,s1,s2,s3,s4) select :uid8+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test9 (uid,s1,s2,s3,s4) select :uid9+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
insert into test10 (uid,s1,s2,s3,s4) select :uid10+id, (select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)),(select array_to_tsvector(array_agg(trunc(5000000*random())||'_'||trunc(20*random()))) from generate_series(1,1000)) from generate_series(1,1000) t(id) on conflict do nothing;  
  
  
nohup pgbench -M prepared -n -r -P 1 -f ./test.sql -c 64 -j 64 -T 1000000 >/dev/null 2>&1 &  

Each tag is generated by combining one of 5 million unique IDs with one of 20 unique IDs (giving values such as 4567_12), and each tsvector stores 1,000 such combinations.
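For reference, discovery against this tsvector layout uses a text-search containment query; a hedged sketch, with a made-up tag value:

select uid
from test1
where s1 @@ $$'4567_12'$$::tsquery;   -- users whose s1 contains the tag 4567_12

-- the RUM indexes created above (rum_tsvector_ops) serve these @@ queries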

Example 2

There are 10 tables, each with 10 million users, and 4 tag fields stored in a text[].

Here we use a GIN index. Other factors are the same as in example 1.

do language plpgsql 
$$
  
declare  
  i int;  
  suffix text;  
  tbs text;  
begin  
  for i in 0..10 loop  
    if i=0 then  
      suffix := '';  
      tbs := 'tbs1';  
    elsif i >=1 and i<=5 then  
      suffix := i::text;  
      tbs := 'tbs1';  
    else  
      suffix := i::text;  
      tbs := 'tbs2';  
    end if;  
    if i=0 then  
      execute 'create unlogged table test'||suffix||'(uid int primary key USING INDEX TABLESPACE '||tbs||', s1 text[], s2 text[], s3 text[], s4 text[]) with (autovacuum_enabled=off, toast.autovacuum_enabled=off) tablespace '||tbs;  
    else  
      execute 'create unlogged table test'||suffix||'(uid int primary key USING INDEX TABLESPACE '||tbs||', s1 text[], s2 text[], s3 text[], s4 text[]) inherits(test) with (autovacuum_enabled=off, toast.autovacuum_enabled=off) tablespace '||tbs;  
    end if;  
    execute 'create index idx_test'||suffix||'_s1 on test'||suffix||' using gin(s1 ) tablespace '||tbs;  
    execute 'create index idx_test'||suffix||'_s2 on test'||suffix||' using gin(s2 ) tablespace '||tbs;  
    execute 'create index idx_test'||suffix||'_s3 on test'||suffix||' using gin(s3 ) tablespace '||tbs;   
    execute 'create index idx_test'||suffix||'_s4 on test'||suffix||' using gin(s4 ) tablespace '||tbs;  
  end loop;  
end;  

$$
;  
  
select relname,reltablespace from pg_class  where relname ~ 'test' order by 2,1;  

Example 3 (pressure testing)

There are 64 partition tables, each with 5 million records. An int array is used to store a total of 4 million tags, and each user has 4,000 random tags to ensure that there are enough target customers.

We also use a GIN index for target customer discovery.

alter role postgres set gin_pending_list_limit='128MB';  
  
do language plpgsql 
$$
  
declare  
  i int;  
  suffix text;  
  tbs text;  
begin  
  for i in 0..64 loop  
    if i=0 then  
      suffix := '';  
      tbs := 'tbs1';  
    elsif i >=1 and i<=32 then  
      suffix := i::text;  
      tbs := 'tbs1';  
    else  
      suffix := i::text;  
      tbs := 'tbs2';  
    end if;  
    if i=0 then  
      execute 'create unlogged table test'||suffix||'(uid int primary key USING INDEX TABLESPACE '||tbs||', s1 int[], s2 int[], s3 int[], s4 int[]) with (autovacuum_enabled=off, toast.autovacuum_enabled=off) tablespace '||tbs;  
    else  
      execute 'create unlogged table test'||suffix||'(uid int primary key USING INDEX TABLESPACE '||tbs||', s1 int[], s2 int[], s3 int[], s4 int[]) inherits(test) with (autovacuum_enabled=off, toast.autovacuum_enabled=off) tablespace '||tbs;  
    end if;  
    execute 'create index idx_test'||suffix||'_s1 on test'||suffix||' using gin(s1 ) tablespace '||tbs;  
    execute 'create index idx_test'||suffix||'_s2 on test'||suffix||' using gin(s2 ) tablespace '||tbs;  
    execute 'create index idx_test'||suffix||'_s3 on test'||suffix||' using gin(s3 ) tablespace '||tbs;   
    execute 'create index idx_test'||suffix||'_s4 on test'||suffix||' using gin(s4 ) tablespace '||tbs;  
  end loop;  
end;  

$$
;  
  
select relname,reltablespace from pg_class  where relname ~ 'test' order by 2,1;  

Script for generating test data

vi test1.sh  
  
for ((i=1;i<=64;i++))  
do  
echo "\set uid random($((($i-1)*5000000+1)),$(($i*5000000)))" > test$i.sql  
  
echo "insert into test$i (uid,s1,s2,s3,s4) select :uid, (select array_agg(trunc(random()*4000000)) from generate_series(1,1000)) s1,(select array_agg(trunc(random()*4000000)) from generate_series(1,1000)) s2,(select array_agg(trunc(random()*4000000)) from generate_series(1,1000)) s3, (select array_agg(trunc(random()*4000000)) from generate_series(1,1000)) s4 on conflict do nothing;" >> test$i.sql  
  
done  
  
. ./test1.sh  

Begin generating test data

vi test2.sh  
  
for ((i=1;i<=64;i++))  
do  
nohup pgbench -M prepared -n -r -P 1 -f ./test$i.sql -c 1 -j 1 -T 1000000 >/dev/null 2>&1 &  
done  
  
. ./test2.sh   

Once the data has been inserted, merge the pending list.

Execute vacuum analyze or gin_clean_pending_list. Refer to

https://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-INDEX

https://www.postgresql.org/docs/9.6/static/sql-vacuum.html

https://www.postgresql.org/docs/9.6/static/gin-implementation.html#GIN-FAST-UPDATE
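For example, either of the following flushes the GIN pending list built up by the bulk load above (using the table and index names from Example 3):

vacuum analyze test1;

-- or flush a single index's pending list (PostgreSQL 9.6+)
select gin_clean_pending_list('idx_test1_s1'::regclass);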

Target customer discovery requirements - performance testing

Carry out pressure testing on example 3

1.Complete target customer discovery in 10 ms.

For example, search for groups in which s1 includes 1 and s2 includes 4.

postgres=# begin;
BEGIN
Time: 0.030 ms
postgres=# declare a cursor for select uid from test where s1 @> array[1] and s2 @> array[4];
DECLARE CURSOR
Time: 6.679 ms
postgres=# fetch 100 in a;
    uid    
-----------
  19246842
 118611240
 148504032
 185844649
(4 rows)
Time: 101.041 ms

We only found a few users, which is not very representative. We need a query that matches a larger group.

postgres=# begin;
BEGIN
postgres=# declare a cursor for select uid from test where s1 @> array[1] or s2 @> array[4];
DECLARE CURSOR
Time: 3.484 ms
postgres=# fetch 100 in a;
   uid   
---------
 2911941
 2373506
 .....
   29713
 3353782
 2836804
 1602067
(100 rows)
Time: 3.892 ms

postgres=# fetch 100 in a;
   uid   
---------
  384170
 1332271
 4282941
 ......
 1190946
 4524861
 1110635
(100 rows)
Time: 4.005 ms

2.Paging via a cursor, as mentioned above.

3.Stream return, as mentioned above.

4.Parallel batch return

In this step, the plproxy plug-in can be used to run one parallel query against each partition to implement parallel batch return. What are the best results we can expect?

With a serial query, all shard tables are queried sequentially, so the operation can take quite a while: 113 ms to discover 15,221 target customers, for example.

postgres=# explain (analyze,verbose,timing,costs,buffers) select uid from test where s1 @> array[1];
                                                              QUERY PLAN                                                               
---------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..233541.24 rows=206876 width=4) (actual time=0.081..108.037 rows=15221 loops=1)
   Buffers: shared hit=60641
   ->  Seq Scan on public.test  (cost=0.00..0.00 rows=1 width=4) (actual time=0.001..0.001 rows=0 loops=1)
         Output: test.uid
         Filter: (test.s1 @> '{1}'::integer[])
   ->  Bitmap Heap Scan on public.test1  (cost=33.71..2901.56 rows=3188 width=4) (actual time=0.078..0.381 rows=242 loops=1)
         Output: test1.uid
         Recheck Cond: (test1.s1 @> '{1}'::integer[])
         Heap Blocks: exact=238
         Buffers: shared hit=243
         ->  Bitmap Index Scan on idx_test1_s1  (cost=0.00..32.91 rows=3188 width=0) (actual time=0.049..0.049 rows=242 loops=1)
               Index Cond: (test1.s1 @> '{1}'::integer[])
               Buffers: shared hit=5

... 62 tables in the middle are ignored.

   ->  Bitmap Heap Scan on public.test64  (cost=34.00..2935.31 rows=3225 width=4) (actual time=0.068..0.327 rows=214 loops=1)
         Output: test64.uid
         Recheck Cond: (test64.s1 @> '{1}'::integer[])
         Heap Blocks: exact=211
         Buffers: shared hit=216
         ->  Bitmap Index Scan on idx_test64_s1  (cost=0.00..33.19 rows=3225 width=0) (actual time=0.041..0.041 rows=214 loops=1)
               Index Cond: (test64.s1 @> '{1}'::integer[])
               Buffers: shared hit=5
 Planning time: 2.016 ms
 Execution time: 109.400 ms
(519 rows)
Time: 113.216 ms

Parallel query performance is equivalent to the time required for a single partition, usually about 1ms.

postgres=# explain (analyze,verbose,timing,costs,buffers) select uid from test1 where s1 @> array[1];
                                                        QUERY PLAN                                                         
---------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.test1  (cost=33.71..2901.56 rows=3188 width=4) (actual time=0.085..0.383 rows=242 loops=1)
   Output: uid
   Recheck Cond: (test1.s1 @> '{1}'::integer[])
   Heap Blocks: exact=238
   Buffers: shared hit=243
   ->  Bitmap Index Scan on idx_test1_s1  (cost=0.00..32.91 rows=3188 width=0) (actual time=0.051..0.051 rows=242 loops=1)
         Index Cond: (test1.s1 @> '{1}'::integer[])
         Buffers: shared hit=5
 Planning time: 0.097 ms
 Execution time: 0.423 ms
(10 rows)
Time: 1.011 ms

Therefore, parallel query can greatly improve overall performance.
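For reference, a hedged sketch of what the plproxy side of such a parallel query could look like; the cluster name and configuration are hypothetical, and the articles below cover complete setups:

create or replace function discover_uids(i_tag int) returns setof int as $$
    cluster 'tag_cluster';   -- hypothetical plproxy cluster, one connection per partition
    run on all;              -- execute on every partition in parallel
    select uid from test where s1 @> array[i_tag];
$$ language plproxy;

-- select * from discover_uids(1);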

References

Use Plproxy to Design a PostgreSQL Distributed Database

A Smart PostgreSQL Extension plproxy 2.2 Practices

PostgreSQL Best Practices - Vertical Database Shard (Based on plproxy)

However, parallel queries are unnecessary if you intend to use stream return.

Sharding

When we are dealing with several billion users, we can shard them by user ID and then use multiple hosts.

Of course, there's no need to use multiple hosts if your machine has enough space and CPU cores to meet your needs.

What methods can we use to implement multiple hosts? Refer to the following articles for a few simple and quick solutions.

You can think of a postgres_fdw node as the PostgreSQL counterpart of MySQL's TDDL or DRDS. It supports cross-node JOINs, as well as condition, order, and aggregation push-down, making it just as convenient as TDDL/DRDS.

postgres_fdw is stateless and stores only the structure (the distribution rules), so postgres_fdw nodes can easily be scaled out horizontally.
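As a hedged illustration of how little setup a postgres_fdw node needs (host, database name, and credentials below are placeholders):

create extension if not exists postgres_fdw;

create server shard1 foreign data wrapper postgres_fdw
  options (host '10.0.0.1', port '5432', dbname 'tagdb');

create user mapping for postgres server shard1
  options (user 'postgres', password 'secret');

-- local foreign table pointing at the remote partition
create foreign table test_shard1 (uid int, s1 int[], s2 int[], s3 int[], s4 int[])
  server shard1 options (schema_name 'public', table_name 'test1');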

PostgreSQL 9.6 Modularization, Sharding (Based on postgres_fdw) - Supporting Forward Pass on the Kernel Layer

PostgreSQL 9.6 Sharding + Modularization (Based on postgres_fdw) Best Practices - Design and Practices for General Database Sharding

PostgreSQL 9.6 Sharding Based on FDW & pg_pathman

PostgreSQL 9.5+ Efficient Implementation of Table Partitions - pg_pathman

Requirements and Performance for Target Customer Discovery

This requirement also falls within PostgreSQL's realm: it is simply a location-based KNN query, which PostgreSQL can easily handle with a GiST index.
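A minimal sketch of such a KNN query, with a hypothetical table of user coordinates; the GiST index lets ORDER BY the <-> distance operator return the nearest rows directly from the index:

-- hypothetical table holding user locations
create table user_pos (uid int primary key, pos point);
create index idx_user_pos on user_pos using gist (pos);

select uid
from user_pos
order by pos <-> point '(120.19, 30.26)'   -- distance to the query coordinate
limit 10;                                   -- the 10 nearest users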

After data sharding, PostgreSQL can still obtain results quickly via Merge Sort.

For example,

postgres=# explain (analyze,verbose,timing,costs,buffers) select * from t order by id limit 10;
                                                              QUERY PLAN                                                               
---------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.72..1.13 rows=10 width=4) (actual time=0.158..0.165 rows=10 loops=1)
   Output: t.id
   Buffers: shared hit=3 read=4
   ->  Merge Append  (cost=0.72..819.74 rows=20001 width=4) (actual time=0.157..0.162 rows=10 loops=1)
         Sort Key: t.id
         Buffers: shared hit=3 read=4
         ->  Index Only Scan using idx on public.t  (cost=0.12..2.14 rows=1 width=4) (actual time=0.003..0.003 rows=0 loops=1)
               Output: t.id
               Heap Fetches: 0
               Buffers: shared hit=1
         ->  Index Only Scan using idx1 on public.t1  (cost=0.29..225.28 rows=10000 width=4) (actual time=0.107..0.107 rows=6 loops=1)
               Output: t1.id
               Heap Fetches: 6
               Buffers: shared hit=1 read=2
         ->  Index Only Scan using idx2 on public.t2  (cost=0.29..225.28 rows=10000 width=4) (actual time=0.043..0.044 rows=5 loops=1)
               Output: t2.id
               Heap Fetches: 5
               Buffers: shared hit=1 read=2
 Planning time: 0.181 ms
 Execution time: 0.219 ms
(20 rows)

postgres=# explain (analyze,verbose,timing,costs,buffers) select * from t order by id ;
                                                             QUERY PLAN                                                              
-------------------------------------------------------------------------------------------------------------------------------------
 Merge Append  (cost=0.72..819.74 rows=20001 width=4) (actual time=0.043..10.324 rows=20000 loops=1)
   Sort Key: t.id
   Buffers: shared hit=149
   ->  Index Only Scan using idx on public.t  (cost=0.12..2.14 rows=1 width=4) (actual time=0.004..0.004 rows=0 loops=1)
         Output: t.id
         Heap Fetches: 0
         Buffers: shared hit=1
   ->  Index Only Scan using idx1 on public.t1  (cost=0.29..225.28 rows=10000 width=4) (actual time=0.021..3.266 rows=10000 loops=1)
         Output: t1.id
         Heap Fetches: 10000
         Buffers: shared hit=74
   ->  Index Only Scan using idx2 on public.t2  (cost=0.29..225.28 rows=10000 width=4) (actual time=0.017..3.309 rows=10000 loops=1)
         Output: t2.id
         Heap Fetches: 10000
         Buffers: shared hit=74
 Planning time: 0.175 ms
 Execution time: 11.791 ms
(17 rows)

Performance of PostgreSQL for Querying Neighboring Geographical Locations Among Tens of Billions of Data Points

A 12-core machine returns each request in about 0.8 ms, handling about 80,000 TPS.

Summary

Let's get back to the three core requirements of the target customer system.

We did not go over precision in this article, as it belongs more to the realm of the data mining system. We will address precision in a future article (using the MADlib machine learning library with PostgreSQL and Greenplum).

The real-time requirement refers to the ability to update tags in real-time. The solution provided in this article is based on stream processing in the database, which, compared to external stream processing, takes up fewer resources, reduces development costs, improves development efficiency, and is more responsive.

As for efficiency, the solution provided here utilizes PostgreSQL along with GIN indexes of array data to achieve target customer discovery in a matter of milliseconds, even when dealing with trillions of user tags.

Related Articles

Application of Stream Processing for the "Internet of Things (IoT)" - Use PostgreSQL for Real-time Processing (Trillions Each Day)

For the Tribe - How to Leverage PostgreSQL to Pair Genes to Improve Future Generations

Ever-present StreamCompute - Combining PostgreSQL and PipelineDB for IoT

Secrets to Analysis Acceleration Engines - LLVM, Column-Store, Multi-core Parallel, Operator Multiplex Integrate - Open the Treasure Chest of PostgreSQL

Analysis on Requirements for Finance Risk Control, Police Criminal Detection, Social Relation, Networking Analysis, and Database Implementation - PostgreSQL Graph Database Scenario Application

Real-time Data Exchange Platform - BottledWater-pg with Confluent

Application of PostgreSQL in Video and Image De-duplication and Image Search Services

Create Real-time User Portrait Recommendation System Based on Alibaba Cloud RDS PostgreSQL

PostgreSQL and Thoughts on Winning Train Tickets at 12306

Access Control Advertisement Sales System Requirement Analysis and PostgreSQL Database Implementation

Technology used in Double 11 Shopping Events - Logistics and Dynamic Path Planning

Technology used in Double 11 Shopping Events - Word Division and Search

Technology used in Double 11 Shopping Events - Sniping Technology, Down to the Second

Technology used in Double 11 Shopping Events - Millisecond Word Division, Test Regularity and Similarity

PostgreSQL 9.6 Guides Open-source Database to Tackle Multi-core Parallel Computing Puzzles

PostgreSQL's Past and Present

How to Create a GIS Test Environment - Import OpenStreetMap Sample Data into PostgreSQL PostGIS Library

PostgreSQL 9.6 Modularization, Sharding (Based on postgres_fdw) - Supporting Forward Pass on the Kernel Layer

PostgreSQL 9.6 Sharding + Modularization (Based on postgres_fdw) Best Practices - Design and Practices for General Database Sharding

PostgreSQL 9.6 Sharding Based on FDW & pg_pathman

PostgreSQL 9.5+ Efficient Implementation of Table Partitions - pg_pathman

PostgreSQL Database Security Guide

Secrets of PostgreSQL 9.6 – A Single Bloom Algorithm Index to Support Query for Any Column Combination

PostgreSQL Uses Recursive SQL to Find Dependency among Database Objects

Use PostgreSQL to Describe One's Life's Climax, Urine Point, and Low Point - Window/Frame or Slope/Derivative/Curvature/Calculus?

Use PostgreSQL to Make up 618 s of Lost Time - Convergence of Recursive Optimization

Advantages of PostGIS in O2O Application

Performance of PostgreSQL for Querying Neighboring Geographical Locations Among Tens of Billions of Data Points

Use Plproxy to Design a PostgreSQL Distributed Database

A Smart PostgreSQL Extension plproxy 2.2 Practices

PostgreSQL Best Practices - Vertical Database Shard (Based on plproxy)
