diff --git a/Makefile b/Makefile
index c438529..4240bf6 100644
--- a/Makefile
+++ b/Makefile
@@ -4,10 +4,10 @@ MonetDB_INC =
Defines =
CXXFLAGS = --std=c++2a
ifeq ($(AQ_DEBUG), 1)
- OPTFLAGS = -g3 -fsanitize=address -fsanitize=leak
+ OPTFLAGS = -g3 #-fsanitize=address
LINKFLAGS =
else
- OPTFLAGS = -O3 -DNDEBUG -fno-stack-protector
+ OPTFLAGS = -Ofast -DNDEBUG -fno-stack-protector
LINKFLAGS = -flto -s
endif
SHAREDFLAGS = -shared
diff --git a/README.md b/README.md
index ef96a71..3624a73 100644
--- a/README.md
+++ b/README.md
@@ -1,31 +1,20 @@
-
# AQuery++ Database
-
-### Please try the latest code in dev branch if you encounter any problem. Use `git checkout dev` to switch branches.
-
## Introduction
AQuery++ Database is a cross-platform, In-Memory Column-Store Database that incorporates compiled query execution. (**Note**: If you encounter any problems, feel free to contact me via ys3540@nyu.edu)
+# Architecture
+![Architecture](./docs/arch-hybrid.svg)
-## Docker (Recommended):
- - See installation instructions from [docker.com](https://www.docker.com). Run **docker desktop** to start docker engine.
- - In AQuery root directory, type `make docker` to build the docker image from scratch.
- - For Arm-based Mac users, you would have to build and run the **x86_64** docker image because MonetDB doesn't offer official binaries for arm64 Linux. (Run `docker buildx build --platform=linux/amd64 -t aquery .` instead of `make docker`)
- - Finally run the image in **interactive** mode (`docker run --name aquery -it aquery`)
- - When you need to access the container again run `docker start -ai aquery`
- - If there is a need to access the system shell within AQuery, type `dbg` to activate python interpreter and type `os.system('sh')` to launch a shell.
- - Docker image is available on [Docker Hub](https://hub.docker.com/repository/docker/sunyinqi0508/aquery) but building image yourself is highly recommended (see [#2](../../issues/2))
-## CIMS Computer Lab (Only for NYU affiliates who have access)
- 1. Clone this git repo in CIMS.
- 2. Download the [patch](https://drive.google.com/file/d/1YkykhM6u0acZ-btQb4EUn4jAEXPT81cN/view?usp=sharing)
- 3. Decompress the patch to any directory and execute script inside by typing (`source ./cims.sh`). Please use the source command or `. ./cims.sh` (dot space) to execute the script because it contains configurations for environment variables. Also note that this script can only work with bash and compatible shells (e.g. dash, zsh. but not csh)
- 4. Execute `python3 ./prompt.py`
+## AQuery Compiler
+- The query is first processed by the AQuery Compiler, which is composed of a frontend that parses the query into an AST and a backend that generates target code to answer the query.
+- The front end of the AQuery++ Compiler is built on top of [mo-sql-parsing](https://github.com/klahnakoski/mo-sql-parsing), with modifications to handle the AQuery dialect and extensions.
+- The backend of the AQuery++ Compiler generates target code depending on the execution engine: C++ code for the AQuery Execution Engine, SQL plus a C++ post-processor for the Hybrid Engine, or k9 code for the k9 Engine.
+## Execution Engines
+- AQuery++ supports different execution engines thanks to the decoupled compiler structure.
+- Hybrid Execution Engine: decouples the query into two parts. The SQL-compliant part is executed by an embedded version of MonetDB; everything else is executed by a post-processing module that the AQuery++ Compiler generates in C++, which is then compiled and run.
+- AQuery Library: a set of header-based libraries that provide column arithmetic and operations inspired by array-programming languages such as kdb. The C++ post-processor code uses this library, which significantly reduces the complexity of the generated code and thus compile time, while preserving performance. The libraries can also be used by UDFs and user modules, making it easy to write simple, efficient, yet powerful extensions.
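+
+As a rough sketch of the flavor of code this library enables (simplified here to plain `std::vector`; the real containers are `ColRef`/`vector_type`, and `avgw` is the actual windowed-average entry point):
+
+```cpp
+#include <cstdint>
+#include <cstdio>
+#include <vector>
+
+// Simplified stand-in for the library's windowed average (avgw):
+// ret[i] is the mean of the last w values ending at index i (fewer at the start).
+std::vector<double> moving_avg(uint32_t w, const std::vector<int>& col) {
+    std::vector<double> ret(col.size());
+    double s = 0;
+    for (uint32_t i = 0; i < col.size(); ++i) {
+        s += col[i];
+        if (i >= w) s -= col[i - w];
+        ret[i] = s / static_cast<double>(i < w ? i + 1 : w);
+    }
+    return ret;
+}
+
+int main() {
+    std::vector<int> price{10, 20, 30, 40};
+    for (double v : moving_avg(2, price)) std::printf("%g ", v); // 10 15 25 35
+}
+```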
-## Singularity Container
- 1. build container `singularity build aquery.sif aquery.def`
- 2. execute container `singularity exec aquery.sif sh`
- 3. run AQuery `python3 ./prompt.py`
-# Native Installation:
+# Installation:
## Requirements
1. A recent version of Linux, Windows or macOS, with a recent C++ compiler that has C++17 (1z) support (C++20 is recommended, if available, for heterogeneous lookup on unordered containers).
- GCC: 9.0 or above (g++ 7.x, 8.x fail to handle fold-expressions due to a compiler bug)
@@ -38,10 +27,6 @@ AQuery++ Database is a cross-platform, In-Memory Column-Store Database that inco
 - On macOS, MonetDB can be installed easily via Homebrew: `brew install monetdb`.
3. Python 3.6 or above, with the required packages from requirements.txt installed via `python3 -m pip install -r requirements.txt`
-
-## Installation
-AQuery is tested on mainstream operating systems such as Windows, macOS and Linux
-
### Windows
There are multiple options to run AQuery on Windows, but for better consistency I recommend using a simulated Linux environment such as **Windows Subsystem for Linux** (1 or 2), **Docker**, or a **Linux virtual machine**. You can also use the native toolchain from Microsoft Visual Studio, or gcc from WinLibs/Cygwin/MinGW.
@@ -97,7 +82,24 @@ There're multiple options to run AQuery on Windows. But for better consistency I
In this case, upgrade Anaconda or your compiler, or use the Python from your OS or package manager instead. Alternatively (**NOT recommended**), copy/link the library from your system (e.g. /usr/lib/x86_64-linux-gnu/libstdc++.so.6) to Anaconda's library directory (e.g. ~/Anaconda3/lib/).
+## Docker (Recommended):
+ - See installation instructions at [docker.com](https://www.docker.com). Run **Docker Desktop** to start the Docker engine.
+ - In the AQuery root directory, type `make docker` to build the Docker image from scratch.
+ - Arm-based Mac users have to build and run the **x86_64** Docker image, because MonetDB doesn't offer official binaries for arm64 Linux. (Run `docker buildx build --platform=linux/amd64 -t aquery .` instead of `make docker`.)
+ - Finally, run the image in **interactive** mode (`docker run --name aquery -it aquery`).
+ - When you need to access the container again, run `docker start -ai aquery`.
+ - If you need a system shell from within AQuery, type `dbg` to enter the Python interpreter, then `os.system('sh')` to launch a shell.
+ - A Docker image is available on [Docker Hub](https://hub.docker.com/repository/docker/sunyinqi0508/aquery), but building the image yourself is highly recommended (see [#2](../../issues/2)).
+## CIMS Computer Lab (Only for NYU affiliates who have access)
+ 1. Clone this git repo in CIMS.
+ 2. Download the [patch](https://drive.google.com/file/d/1YkykhM6u0acZ-btQb4EUn4jAEXPT81cN/view?usp=sharing)
+ 3. Decompress the patch to any directory and execute the script inside by typing `source ./cims.sh`. Use the `source` command (or `. ./cims.sh`, dot space), because the script sets environment variables. Note that it only works with bash and compatible shells (e.g. dash, zsh; not csh).
+ 4. Execute `python3 ./prompt.py`
+## Singularity Container
+ 1. Build the container: `singularity build aquery.sif aquery.def`
+ 2. Enter the container: `singularity exec aquery.sif sh`
+ 3. Run AQuery: `python3 ./prompt.py`
# Usage
`python3 prompt.py` will launch the interactive command prompt. The server binary will be automatically rebuilt and started.
### Commands:
@@ -268,17 +270,6 @@ SELECT * FROM my_table WHERE c1 > 10
- `sqrt(x), trunc(x), and other builtin math functions`: value-wise math operations. `sqrt(x)[i] = sqrt(x[i])`
- `pack(cols, ...)`: pack multiple columns with exact same type into a single column.
-# Architecture
-![Architecture](./docs/arch-hybrid.svg)
-
-## AQuery Compiler
-- The query is first processed by the AQuery Compiler which is composed of a frontend that parses the query into AST and a backend that generates target code that delivers the query.
-- Front end of AQuery++ Compiler is built on top of [mo-sql-parsing](https://github.com/klahnakoski/mo-sql-parsing) with modifications to handle AQuery dialect and extension.
-- Backend of AQuery++ Compiler generates target code dependent on the Execution Engine. It can either be the C++ code for AQuery Execution Engine or sql and C++ post-processor for Hybrid Engine or k9 for the k9 Engine.
-## Execution Engines
-- AQuery++ supports different execution engines thanks to the decoupled compiler structure.
-- Hybrid Execution Engine: decouples the query into two parts. The sql-compliant part is executed by an Embedded version of Monetdb and everything else is executed by a post-process module which is generated by AQuery++ Compiler in C++ and then compiled and executed.
-- AQuery Library: A set of header based libraries that provide column arithmetic and operations inspired by array programming languages like kdb. This library is used by C++ post-processor code which can significantly reduce the complexity of generated code, reducing compile time while maintaining the best performance. The set of libraries can also be used by UDFs as well as User modules which makes it easier for users to write simple but powerful extensions.
# Roadmap
- [x] SQL Parser -> AQuery Parser (Front End)
diff --git a/aquery_config.py b/aquery_config.py
index 094bc47..df2511a 100644
--- a/aquery_config.py
+++ b/aquery_config.py
@@ -2,7 +2,7 @@
## GLOBAL CONFIGURATION FLAGS
-version_string = '0.5.4a'
+version_string = '0.6.0a'
add_path_to_ldpath = True
rebuild_backend = False
run_backend = True
diff --git a/benchmark/quries/Aquery/load_data.a b/benchmark/quries/Aquery/load_data.a
new file mode 100644
index 0000000..54bc36f
--- /dev/null
+++ b/benchmark/quries/Aquery/load_data.a
@@ -0,0 +1,6 @@
+CREATE TABLE trade01m(stocksymbol STRING, time INT, quantity INT, price INT)
+load data infile "../tables/trade01m.csv" into table trade01m fields terminated by ','
+CREATE TABLE trade1m(stocksymbol STRING, time INT, quantity INT, price INT)
+load data infile "../tables/trade1m.csv" into table trade1m fields terminated by ','
+CREATE TABLE trade10m(stocksymbol STRING, time INT, quantity INT, price INT)
+load data infile "../tables/trade10m.csv" into table trade10m fields terminated by ','
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q0.a b/benchmark/quries/Aquery/q0.a
new file mode 100644
index 0000000..a18deec
--- /dev/null
+++ b/benchmark/quries/Aquery/q0.a
@@ -0,0 +1,5 @@
+-- select rows
+
+CREATE TABLE res0 AS
+SELECT * FROM trade10m
+
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q1.a b/benchmark/quries/Aquery/q1.a
new file mode 100644
index 0000000..f3077a9
--- /dev/null
+++ b/benchmark/quries/Aquery/q1.a
@@ -0,0 +1,7 @@
+-- groupby_multi_different_functions
+
+CREATE TABLE res1 AS
+SELECT avg(quantity) AS avg_quan, min(price) AS min_p
+FROM trade1m
+GROUP BY stocksymbol, time
+
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q10.a b/benchmark/quries/Aquery/q10.a
new file mode 100644
index 0000000..8c891ba
--- /dev/null
+++ b/benchmark/quries/Aquery/q10.a
@@ -0,0 +1,4 @@
+SELECT stocksymbol, MAX(stddevs(3, price))
+FROM trade1m
+ASSUMING ASC time
+GROUP BY stocksymbol
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q2.a b/benchmark/quries/Aquery/q2.a
new file mode 100644
index 0000000..28e6368
--- /dev/null
+++ b/benchmark/quries/Aquery/q2.a
@@ -0,0 +1,4 @@
+-- count values
+
+SELECT COUNT(*) FROM trade10m
+
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q3.a b/benchmark/quries/Aquery/q3.a
new file mode 100644
index 0000000..c6f7a5b
--- /dev/null
+++ b/benchmark/quries/Aquery/q3.a
@@ -0,0 +1,7 @@
+-- group by multiple keys
+
+create table res3 AS
+SELECT sum(quantity) as sum_quantity
+FROM trade01m
+GROUP BY stocksymbol, price
+
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q4.a b/benchmark/quries/Aquery/q4.a
new file mode 100644
index 0000000..bab175f
--- /dev/null
+++ b/benchmark/quries/Aquery/q4.a
@@ -0,0 +1,5 @@
+-- append tables
+
+CREATE TABLE res4 AS
+SELECT * FROM trade10m UNION ALL SELECT * FROM trade10m
+
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q7.a b/benchmark/quries/Aquery/q7.a
new file mode 100644
index 0000000..7e384c8
--- /dev/null
+++ b/benchmark/quries/Aquery/q7.a
@@ -0,0 +1,5 @@
+CREATE table res7 AS
+SELECT stocksymbol, avgs(5, price)
+FROM trade10m
+ASSUMING ASC time
+GROUP BY stocksymbol
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q8.a b/benchmark/quries/Aquery/q8.a
new file mode 100644
index 0000000..6642520
--- /dev/null
+++ b/benchmark/quries/Aquery/q8.a
@@ -0,0 +1,6 @@
+
+CREATE TABLE res8 AS
+SELECT stocksymbol, quantity, price
+FROM trade10m
+WHERE time >= 5288 and time <= 7000
+
\ No newline at end of file
diff --git a/benchmark/quries/Aquery/q9.a b/benchmark/quries/Aquery/q9.a
new file mode 100644
index 0000000..7348b8e
--- /dev/null
+++ b/benchmark/quries/Aquery/q9.a
@@ -0,0 +1,6 @@
+
+CREATE TABLE res9 AS
+SELECT stocksymbol, MAX(price) - MIN(price)
+FROM trade10m
+GROUP BY stocksymbol
+
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q0 b/benchmark/quries/Clickhouse/q0
new file mode 100644
index 0000000..e06e534
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q0
@@ -0,0 +1,3 @@
+-- q0 select rows
+CREATE TABLE res0 (a String, b Int32, c Int32, d Int32) ENGINE = MergeTree() ORDER BY b AS
+SELECT * FROM benchmark.trade10m
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q1 b/benchmark/quries/Clickhouse/q1
new file mode 100644
index 0000000..21ef83b
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q1
@@ -0,0 +1,4 @@
+-- groupby_multi_different_functions
+SELECT avg(quantity), min(price)
+FROM benchmark.trade10m
+GROUP BY stocksymbol, time
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q10 b/benchmark/quries/Clickhouse/q10
new file mode 100644
index 0000000..c251cb6
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q10
@@ -0,0 +1,8 @@
+-- max rolling std
+select
+ stocksymbol,
+ max(stddevPop(price)) over
+ (partition by stocksymbol rows between 2 preceding AND CURRENT row) as maxRollingStd
+from
+(SELECT * FROM benchmark.trade01m ORDER BY time)
+GROUP BY stocksymbol
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q2 b/benchmark/quries/Clickhouse/q2
new file mode 100644
index 0000000..1267934
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q2
@@ -0,0 +1,2 @@
+-- count values
+SELECT COUNT(*) FROM benchmark.trade10m
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q3 b/benchmark/quries/Clickhouse/q3
new file mode 100644
index 0000000..79ea85e
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q3
@@ -0,0 +1,4 @@
+-- group by multiple keys
+SELECT sum(quantity)
+FROM benchmark.trade10m
+GROUP BY stocksymbol, price
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q4 b/benchmark/quries/Clickhouse/q4
new file mode 100644
index 0000000..016f3fc
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q4
@@ -0,0 +1,2 @@
+-- append two tables
+SELECT * FROM benchmark.trade10m UNION ALL SELECT * FROM benchmark.trade10m
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q7 b/benchmark/quries/Clickhouse/q7
new file mode 100644
index 0000000..ed57058
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q7
@@ -0,0 +1,5 @@
+-- moving_avg
+SELECT stocksymbol, groupArrayMovingAvg(5)(price) AS moving_avg_price
+FROM
+(SELECT * FROM benchmark.trade01m ORDER BY time)
+GROUP BY stocksymbol
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q8 b/benchmark/quries/Clickhouse/q8
new file mode 100644
index 0000000..550abbd
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q8
@@ -0,0 +1,3 @@
+SELECT stocksymbol, quantity, price
+FROM benchmark.trade10m
+WHERE time >= 5288 and time <= 7000
\ No newline at end of file
diff --git a/benchmark/quries/Clickhouse/q9 b/benchmark/quries/Clickhouse/q9
new file mode 100644
index 0000000..48312c9
--- /dev/null
+++ b/benchmark/quries/Clickhouse/q9
@@ -0,0 +1,3 @@
+SELECT stocksymbol, MAX(price) - MIN(price)
+FROM benchmark.trade1m
+GROUP BY stocksymbol
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q0 b/benchmark/quries/Timescaledb/q0
new file mode 100644
index 0000000..b6dec8f
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q0
@@ -0,0 +1,3 @@
+-- select rows
+CREATE TABLE res0 AS
+SELECT * FROM trade10m;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q1 b/benchmark/quries/Timescaledb/q1
new file mode 100644
index 0000000..0ac4c46
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q1
@@ -0,0 +1,4 @@
+-- groupby_multi_different_functions
+SELECT avg(quantity), min(price)
+FROM trade10m
+GROUP BY stocksymbol, time;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q10 b/benchmark/quries/Timescaledb/q10
new file mode 100644
index 0000000..6d4b326
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q10
@@ -0,0 +1,7 @@
+select
+ stocksymbol,
+ max(stddev(price)) over
+ (partition by stocksymbol rows between 2 preceding AND CURRENT row) as maxRollingStd
+from
+(SELECT * FROM trade01m ORDER BY time) as t
+GROUP BY stocksymbol;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q2 b/benchmark/quries/Timescaledb/q2
new file mode 100644
index 0000000..b1f00f6
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q2
@@ -0,0 +1,2 @@
+-- count values
+SELECT COUNT(*) FROM trade10m;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q3 b/benchmark/quries/Timescaledb/q3
new file mode 100644
index 0000000..0176182
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q3
@@ -0,0 +1,4 @@
+-- group by multiple keys
+SELECT sum(quantity)
+FROM trade10m
+GROUP BY stocksymbol, price;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q4 b/benchmark/quries/Timescaledb/q4
new file mode 100644
index 0000000..a3e7f14
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q4
@@ -0,0 +1,2 @@
+-- append tables
+SELECT * FROM trade10m UNION ALL SELECT * FROM trade10m;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q7 b/benchmark/quries/Timescaledb/q7
new file mode 100644
index 0000000..c0aa976
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q7
@@ -0,0 +1,5 @@
+select
+ stocksymbol,
+ coalesce(avg(price) over
+ (partition by stocksymbol order by time rows between 4 preceding AND CURRENT row), price) as rollingAvg
+from trade10m;
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q8 b/benchmark/quries/Timescaledb/q8
new file mode 100644
index 0000000..db6be13
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q8
@@ -0,0 +1,3 @@
+SELECT stocksymbol, quantity, price
+FROM trade01m
+WHERE time >= 5288 and time <= 7000
\ No newline at end of file
diff --git a/benchmark/quries/Timescaledb/q9 b/benchmark/quries/Timescaledb/q9
new file mode 100644
index 0000000..e8c0b92
--- /dev/null
+++ b/benchmark/quries/Timescaledb/q9
@@ -0,0 +1,3 @@
+SELECT stocksymbol, MAX(price) - MIN(price)
+FROM trade01m
+GROUP BY stocksymbol;
\ No newline at end of file
diff --git a/build.py b/build.py
index ec59122..e8c5255 100644
--- a/build.py
+++ b/build.py
@@ -117,7 +117,7 @@ class build_manager:
else:
mgr.cxx = os.environ['CXX']
if 'AQ_DEBUG' not in os.environ:
- os.environ['AQ_DEBUG'] = '0' if mgr.OptimizationLv else '1'
+ os.environ['AQ_DEBUG'] = ('0' if mgr.OptimizationLv != '0' else '1')
def libaquery_a(self):
self.build_cmd = [['rm', 'libaquery.a'],['make', 'libaquery']]
@@ -184,7 +184,7 @@ class build_manager:
def __init__(self) -> None:
self.method = 'make'
self.cxx = ''
- self.OptimizationLv = '0' # [O0, O1, O2, O3, Ofast]
+ self.OptimizationLv = '4' # [O0, O1, O2, O3, Ofast]
self.Platform = 'amd64'
self.PCH = os.environ['PCH'] if 'PCH' in os.environ else 1
self.StaticLib = 1
diff --git a/datagen.cpp b/datagen.cpp
index c96b480..41fbfc0 100644
--- a/datagen.cpp
+++ b/datagen.cpp
@@ -80,7 +80,7 @@ int gen_trade_data(int argc, char* argv[])
memmove(p + lens[i], p + lens[0], (lens[i - 1] - lens[i]) * sizeof(int));
permutation(p, lens[0] + N);
// for (int i = 0; i < lens[0] + N; ++i) printf("%d ", p[i]);
- FILE* fp = fopen("trade.csv", "w");
+ FILE* fp = fopen("trade.csv", "wb");
int* last_price = new int[N];
memset(last_price, -1, sizeof(int) * N);
fprintf(fp, "stocksymbol, time, quantity, price\n");
@@ -131,7 +131,7 @@ int gen_stock_data(int argc, char* argv[]){
}
IDs[n_stocks] = "S";
names[n_stocks] = "x";
- FILE* fp = fopen("./data/stock.csv", "w");
+ FILE* fp = fopen("./data/stock.csv", "wb");
fprintf(fp, "ID, timestamp, tradeDate, price\n");
char date_str_buf [types::date_t::string_length()];
int* timestamps = new int[n_data];
@@ -142,7 +142,7 @@ int gen_stock_data(int argc, char* argv[]){
fprintf(fp, "%s,%d,%s,%d\n", IDs[ui(engine)%(n_stocks + 1)].c_str(), timestamps[i], date, ui(engine) % 1000);
}
fclose(fp);
- fp = fopen("./data/base.csv", "w");
+ fp = fopen("./data/base.csv", "wb");
fprintf(fp, "ID, name\n");
for(int i = 0; i < n_stocks + 1; ++ i){
fprintf(fp, "%s,%s\n", IDs[i].c_str(), names[i].c_str());
diff --git a/engine/ddl.py b/engine/ddl.py
index 9ba41db..06eb9f0 100644
--- a/engine/ddl.py
+++ b/engine/ddl.py
@@ -110,7 +110,7 @@ class outfile(ast_node):
filename = node['loc']['literal'] if 'loc' in node else node['literal']
sep = ',' if 'term' not in node else node['term']['literal']
file_pointer = 'fp_' + base62uuid(6)
- self.emit(f'FILE* {file_pointer} = fopen("{filename}", "w");')
+ self.emit(f'FILE* {file_pointer} = fopen("{filename}", "wb");')
self.emit(f'{out_table.cxt_name}->printall("{sep}", "\\n", nullptr, {file_pointer});')
self.emit(f'fclose({file_pointer});')
# self.context.headers.add('fstream')
diff --git a/engine/types.py b/engine/types.py
index 5baf47f..46b750b 100644
--- a/engine/types.py
+++ b/engine/types.py
@@ -107,9 +107,9 @@ ULongT = Types(8, name = 'uint64', sqlname = 'UINT64', fp_type=DoubleT)
UIntT = Types(7, name = 'uint32', sqlname = 'UINT32', long_type=ULongT, fp_type=FloatT)
UShortT = Types(6, name = 'uint16', sqlname = 'UINT16', long_type=ULongT, fp_type=FloatT)
UByteT = Types(5, name = 'uint8', sqlname = 'UINT8', long_type=ULongT, fp_type=FloatT)
-StrT = Types(200, name = 'str', cname = 'const char*', sqlname='TEXT', ctype_name = 'types::ASTR')
-TextT = Types(200, name = 'text', cname = 'const char*', sqlname='TEXT', ctype_name = 'types::ASTR')
-VarcharT = Types(200, name = 'varchar', cname = 'const char*', sqlname='VARCHAR', ctype_name = 'types::ASTR')
+StrT = Types(200, name = 'str', cname = 'string_view', sqlname='TEXT', ctype_name = 'types::ASTR')
+TextT = Types(200, name = 'text', cname = 'string_view', sqlname='TEXT', ctype_name = 'types::ASTR')
+VarcharT = Types(200, name = 'varchar', cname = 'string_view', sqlname='VARCHAR', ctype_name = 'types::ASTR')
VoidT = Types(200, name = 'void', cname = 'void', sqlname='Null', ctype_name = 'types::None')
class VectorT(Types):
@@ -305,7 +305,7 @@ opor = OperatorBase('or', 2, logical, cname = '||', sqlname = ' OR ', call = bin
opxor = OperatorBase('xor', 2, logical, cname = '^', sqlname = ' XOR ', call = binary_op_behavior)
opgt = OperatorBase('gt', 2, logical, cname = '>', sqlname = '>', call = binary_op_behavior)
oplt = OperatorBase('lt', 2, logical, cname = '<', sqlname = '<', call = binary_op_behavior)
-opge = OperatorBase('gte', 2, logical, cname = '>=', sqlname = '>=', call = binary_op_behavior)
+opgte = OperatorBase('gte', 2, logical, cname = '>=', sqlname = '>=', call = binary_op_behavior)
oplte = OperatorBase('lte', 2, logical, cname = '<=', sqlname = '<=', call = binary_op_behavior)
opneq = OperatorBase('neq', 2, logical, cname = '!=', sqlname = '!=', call = binary_op_behavior)
opeq = OperatorBase('eq', 2, logical, cname = '==', sqlname = '=', call = binary_op_behavior)
@@ -355,19 +355,27 @@ fnpow = OperatorBase('pow', 2, lambda *_ : DoubleT, cname = 'pow', sqlname = 'PO
# type collections
def _op_make_dict(*items : OperatorBase):
return { i.name: i for i in items}
+#binary op
builtin_binary_arith = _op_make_dict(opadd, opdiv, opmul, opsub, opmod)
builtin_binary_logical = _op_make_dict(opand, opor, opxor, opgt, oplt,
- opge, oplte, opneq, opeq)
+ opgte, oplte, opneq, opeq)
+builtin_binary_ops = {**builtin_binary_arith, **builtin_binary_logical}
+#unary op
builtin_unary_logical = _op_make_dict(opnot)
builtin_unary_arith = _op_make_dict(opneg)
builtin_unary_special = _op_make_dict(spnull, opdistinct)
+# functions
builtin_cstdlib = _op_make_dict(fnsqrt, fnlog, fnsin, fncos, fntan, fnpow)
-builtin_func = _op_make_dict(fnmax, fnmin, fnsum, fnavg, fnmaxs,
- fnmins, fndeltas, fnratios, fnlast,
- fnfirst, fnsums, fnavgs, fncnt,
- fnpack, fntrunc, fnprev, fnnext,
- fnvar, fnvars, fnstd, fnstds)
+builtin_aggfunc = _op_make_dict(fnmax, fnmin, fnsum, fnavg,
+ fnlast, fnfirst, fncnt, fnvar, fnstd)
+builtin_vecfunc = _op_make_dict(fnmaxs,
+ fnmins, fndeltas, fnratios, fnsums, fnavgs,
+ fnpack, fntrunc, fnprev, fnnext, fnvars, fnstds)
+builtin_vecfunc = {**builtin_vecfunc, **builtin_cstdlib}
+builtin_func = {**builtin_vecfunc, **builtin_aggfunc}
+
user_module_func = {}
+
builtin_operators : Dict[str, OperatorBase] = {**builtin_binary_arith, **builtin_binary_logical,
**builtin_unary_arith, **builtin_unary_logical, **builtin_unary_special, **builtin_func, **builtin_cstdlib,
**user_module_func}
diff --git a/engine/utils.py b/engine/utils.py
index 8e65fcd..59b1309 100644
--- a/engine/utils.py
+++ b/engine/utils.py
@@ -157,4 +157,4 @@ def get_innermost(sl):
elif sl and type(sl) is list:
return get_innermost(sl[0])
else:
- return sl
\ No newline at end of file
+ return sl
diff --git a/header.cxx b/header.cxx
index 612b4a9..73bdce1 100644
--- a/header.cxx
+++ b/header.cxx
@@ -5,6 +5,7 @@
#include "./server/gc.h"
__AQEXPORT__(void) __AQ_Init_GC__(Context* cxt) {
    GC::gc_handle = static_cast<GC*>(cxt->gc);
+ GC::scratch_space = nullptr;
}
#else // __AQ_USE_THREADEDGC__
diff --git a/mem_opt.cpp b/mem_opt.cpp
new file mode 100644
index 0000000..452fedd
--- /dev/null
+++ b/mem_opt.cpp
@@ -0,0 +1,72 @@
+#include "./server/libaquery.h"
+
+#ifndef __AQ_USE_THREADEDGC__
+
+#include "./server/gc.h"
+__AQEXPORT__(void) __AQ_Init_GC__(Context* cxt) {
+    GC::gc_handle = static_cast<GC*>(cxt->gc);
+}
+
+#else // __AQ_USE_THREADEDGC__
+#define __AQ_Init_GC__(x)
+#endif // __AQ_USE_THREADEDGC__
+#include "./server/hasher.h"
+#include "./server/monetdb_conn.h"
+#include "./server/aggregations.h"
+
+__AQEXPORT__(int) dll_2Cxoox(Context* cxt) {
+ using namespace std;
+ using namespace types;
+    auto server = static_cast<Server*>(cxt->alt_server);
+auto len_4ycjiV = server->cnt;
+auto mont_8AE = ColRef(len_4ycjiV, server->getCol(0));
+auto sales_2RB = ColRef(len_4ycjiV, server->getCol(1));
+const char* names_6pIt[] = {"mont", "minw2ysales"};
+auto out_2LuaMH = new TableInfo>("out_2LuaMH", names_6pIt);
+decltype(auto) col_EeW23s = out_2LuaMH->get_col<0>();
+decltype(auto) col_5gY1Dm = out_2LuaMH->get_col<1>();
+typedef record> record_typegj3e8Xf;
+ankerl::unordered_dense::map> gMzMTEvd;
+gMzMTEvd.reserve(mont_8AE.size);
+uint32_t* reversemap = new uint32_t[mont_8AE.size<<1],
+ *mapbase = reversemap + mont_8AE.size;
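+// First pass: hash each row's grouping key; hashtable_push returns the row's
+// group id, recorded in reversemap (one entry per input row).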
+for (uint32_t i2E = 0; i2E < mont_8AE.size; ++i2E){
+ reversemap[i2E] = gMzMTEvd.hashtable_push(forward_as_tuple(mont_8AE[i2E]));
+}
+auto arr_values = gMzMTEvd.values().data();
+auto arr_len = gMzMTEvd.size();
+uint32_t* seconds = new uint32_t[gMzMTEvd.size()];
+
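+// Second pass: prefix-sum the per-group counts so that group i owns a
+// contiguous slice of mapbase; vecs[i] is initialized as a view over it.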
+auto vecs = static_cast*>(malloc(sizeof(vector_type) * arr_len));
+vecs[0].init_from(arr_values[0].second, mapbase);
+for (uint32_t i = 1; i < arr_len; ++i) {
+ vecs[i].init_from(arr_values[i].second, mapbase + arr_values[i - 1].second);
+ arr_values[i].second += arr_values[i - 1].second;
+}
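+// Third pass (counting-sort style): scatter each row index into its group's
+// slice, so vecs[i] enumerates the source rows belonging to group i.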
+for (uint32_t i = 0; i < mont_8AE.size; ++i) {
+ auto id = reversemap[i];
+ mapbase[--arr_values[id].second] = i;
+}
+
+col_EeW23s.reserve(gMzMTEvd.size());
+col_5gY1Dm.reserve(gMzMTEvd.size());
+auto buf_col_5gY1Dm = new double[mont_8AE.size];
+for (uint32_t i = 0; i < arr_len; ++i) {
+ col_5gY1Dm[i].init_from(vecs[i].size, buf_col_5gY1Dm + arr_values[i].second);
+}
+for (uint32_t i = 0; i < arr_len; ++i) {
+
+auto &key_3iNX3qG = arr_values[i].first;
+auto &val_7jjv8Mo = arr_values[i].second;
+col_EeW23s.emplace_back(get<0>(key_3iNX3qG));
+
+avgw(10, sales_2RB[vecs[i]], col_5gY1Dm[i]);
+
+}
+//print(*out_2LuaMH);
+//FILE* fp_5LQeym = fopen("flatten.csv", "wb");
+out_2LuaMH->printall(",", "\n", nullptr, nullptr, 10);
+//fclose(fp_5LQeym);
+puts("done.");
+return 0;
+}
\ No newline at end of file
diff --git a/proctool.py b/proctool.py
new file mode 100644
index 0000000..1ff726c
--- /dev/null
+++ b/proctool.py
@@ -0,0 +1,51 @@
+import struct
+import readline
+from typing import List
+
+name : str = input('Filename (in path ./procedures/<name>.aqp):')
+
+def write():
+ s : str = input()
+ qs : List[str] = []
+
+ while(len(s) and not s.startswith('S')):
+ qs.append(s)
+ s = input()
+
+ ms : int = int(input())
+
+ with open(f'./procedures/{name}.aqp', 'wb') as fp:
+ fp.write(struct.pack("I", len(qs) + (ms > 0)))
+ fp.write(struct.pack("I", ms))
+ if (ms > 0):
+ fp.write(b'N\x00')
+
+ for q in qs:
+ fp.write(q.encode('utf-8'))
+ if q.startswith('Q'):
+ fp.write(b'\n ')
+ fp.write(b'\x00')
+
+
+def read():
+ with open(f'./procedures/{name}.aqp', 'rb') as fp:
+ nq = struct.unpack("I", fp.read(4))[0]
+ ms = struct.unpack("I", fp.read(4))[0]
+ qs = fp.read().split(b'\x00')
+ print(f'Procedure {name}, {nq} queries, {ms} modules:')
+ for q in qs:
+ print(' ' + q.decode('utf-8'))
+
+
+if __name__ == '__main__':
+ while True:
+ cmd = input("r for read, w for write: ")
+ if cmd.lower().startswith('r'):
+ read()
+ break
+ elif cmd.lower().startswith('w'):
+ write()
+ break
+ elif cmd.lower().startswith('q'):
+ break
+
\ No newline at end of file
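
For reference, the `.aqp` layout written above (two little-endian `uint32` counts, then NUL-terminated query strings, the first being `"N"` when a module count is present) can also be consumed from C++. A minimal sketch, assuming exactly the format `write()` produces:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Reads ./procedures/<name>.aqp: uint32 nq, uint32 ms, then NUL-terminated
// query strings (mirrors read() above; assumes native little-endian layout).
std::vector<std::string> read_aqp(const char* path) {
    std::vector<std::string> queries;
    FILE* fp = std::fopen(path, "rb");
    if (!fp) return queries;
    uint32_t nq = 0, ms = 0;
    std::fread(&nq, sizeof nq, 1, fp);
    std::fread(&ms, sizeof ms, 1, fp);
    std::printf("%u queries, %u modules\n", nq, ms);
    std::string cur;
    for (int ch; (ch = std::fgetc(fp)) != EOF;)
        if (ch == '\0') { queries.push_back(cur); cur.clear(); }
        else cur.push_back(static_cast<char>(ch));
    std::fclose(fp);
    return queries;
}
```
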
diff --git a/reconstruct/ast.py b/reconstruct/ast.py
index 870df5b..a260ccb 100644
--- a/reconstruct/ast.py
+++ b/reconstruct/ast.py
@@ -4,8 +4,8 @@ from enum import Enum, auto
from typing import Dict, List, Optional, Set, Tuple, Union
from engine.types import *
-from engine.utils import (base62alp, base62uuid, enlist, get_innermost,
- get_legal_name)
+from engine.utils import (base62alp, base62uuid, enlist,
+ get_innermost, get_legal_name)
from reconstruct.storage import ColRef, Context, TableInfo
class ast_node:
@@ -339,8 +339,8 @@ class projection(ast_node):
return ', '.join([self.pyname2cname[n.name] for n in lst_names])
else:
return self.pyname2cname[proj_name]
-
- for key, val in proj_map.items():
+ gb_tovec = [False] * len(proj_map)
+ for i, (key, val) in enumerate(proj_map.items()):
if type(val[1]) is str:
x = True
y = get_proj_name
@@ -357,22 +357,27 @@ class projection(ast_node):
out_typenames[key] = decltypestring
else:
out_typenames[key] = val[0].cname
- if (type(val[2].udf_called) is udf and # should bulkret also be colref?
+ elemental_ret_udf = (
+ type(val[2].udf_called) is udf and # should bulkret also be colref?
val[2].udf_called.return_pattern == udf.ReturnPattern.elemental_return
- or
- self.group_node and
- (self.group_node.use_sp_gb and
+ )
+ folding_vector_groups = (
+ self.group_node and
+ (
+ self.group_node.use_sp_gb and
val[2].cols_mentioned.intersection(
self.datasource.all_cols().difference(
self.datasource.get_joint_cols(self.group_node.refs)
- ))
- ) and val[2].is_compound # compound val not in key
- # or
- # val[2].is_compound > 1
- # (not self.group_node and val[2].is_compound)
- ):
- out_typenames[key] = f'vector_type<{out_typenames[key]}>'
- self.out_table.columns[key].compound = True
+ )
+ )
+ ) and
+ val[2].is_compound # compound val not in key
+ )
+ if (elemental_ret_udf or folding_vector_groups):
+ out_typenames[key] = f'vector_type<{out_typenames[key]}>'
+ self.out_table.columns[key].compound = True
+ if self.group_node is not None and self.group_node.use_sp_gb:
+ gb_tovec[i] = True
outtable_col_nameslist = ', '.join([f'"{c.name}"' for c in self.out_table.columns])
self.outtable_col_names = 'names_' + base62uuid(4)
self.context.emitc(f'const char* {self.outtable_col_names}[] = {{{outtable_col_nameslist}}};')
@@ -384,12 +389,14 @@ class projection(ast_node):
gb_vartable : Dict[str, Union[str, int]] = deepcopy(self.pyname2cname)
gb_cexprs : List[str] = []
gb_colnames : List[str] = []
+ gb_types : List[Types] = []
for key, val in proj_map.items():
col_name = 'col_' + base62uuid(6)
self.context.emitc(f'decltype(auto) {col_name} = {self.out_table.contextname_cpp}->get_col<{key}>();')
gb_cexprs.append((col_name, val[2]))
gb_colnames.append(col_name)
- self.group_node.finalize(gb_cexprs, gb_vartable, gb_colnames)
+ gb_types.append(val[0])
+ self.group_node.finalize(gb_cexprs, gb_vartable, gb_colnames, gb_types, gb_tovec)
else:
for i, (key, val) in enumerate(proj_map.items()):
if type(val[1]) is int:
@@ -533,6 +540,7 @@ class groupby_c(ast_node):
def init(self, node : List[Tuple[expr, Set[ColRef]]]):
self.proj : projection = self.parent
self.glist : List[Tuple[expr, Set[ColRef]]] = node
+ self.vecs : str = 'vecs_' + base62uuid(3)
return super().init(node)
def produce(self, node : List[Tuple[expr, Set[ColRef]]]):
@@ -561,21 +569,22 @@ class groupby_c(ast_node):
e = g_str
g_contents_list.append(e)
first_col = g_contents_list[0]
+ self.total_sz = 'len_' + base62uuid(4)
+ self.context.emitc(f'uint32_t {self.total_sz} = {first_col}.size;')
g_contents_decltype = [f'decays' for c in g_contents_list]
g_contents = ', '.join(
[f'{c}[{scanner_itname}]' for c in g_contents_list]
)
self.context.emitc(f'typedef record<{",".join(g_contents_decltype)}> {self.group_type};')
- self.context.emitc(f'ankerl::unordered_dense::map<{self.group_type}, vector_type, '
- f'transTypes<{self.group_type}, hasher>> {self.group};')
- self.context.emitc(f'{self.group}.reserve({first_col}.size);')
+ self.context.emitc(f'AQHashTable<{self.group_type}, '
+ f'transTypes<{self.group_type}, hasher>> {self.group} {{{self.total_sz}}};')
self.n_grps = len(self.glist)
- self.scanner = scan(self, first_col + '.size', it_name=scanner_itname)
- self.scanner.add(f'{self.group}[forward_as_tuple({g_contents})].emplace_back({self.scanner.it_var});')
+ self.scanner = scan(self, self.total_sz, it_name=scanner_itname)
+ self.scanner.add(f'{self.group}.hashtable_push(forward_as_tuple({g_contents}), {self.scanner.it_var});')
def consume(self, _):
self.scanner.finalize()
-
+ self.context.emitc(f'auto {self.vecs} = {self.group}.ht_postproc({self.total_sz});')
# def deal_with_assumptions(self, assumption:assumption, out:TableInfo):
# gscanner = scan(self, self.group)
# val_var = 'val_'+base62uuid(7)
@@ -583,16 +592,42 @@ class groupby_c(ast_node):
# gscanner.add(f'{self.datasource.cxt_name}->order_by<{assumption.result()}>(&{val_var});')
# gscanner.finalize()
- def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]], col_names : List[str]):
- for c in col_names:
+ def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]],
+ col_names : List[str], col_types : List[Types], col_tovec : List[bool]):
+ tovec_columns = set()
+ for i, c in enumerate(col_names):
self.context.emitc(f'{c}.reserve({self.group}.size());')
-
- gscanner = scan(self, self.group, loop_style = 'for_each')
+ if col_tovec[i]: # and type(col_types[i]) is VectorT:
+ typename : Types = col_types[i] # .inner_type
+ self.context.emitc(f'auto buf_{c} = static_cast<{typename.cname} *>(calloc({self.total_sz}, sizeof({typename.cname})));')
+ tovec_columns.add(c)
+ self.arr_len = 'arrlen_' + base62uuid(3)
+ self.arr_values = 'arrvals_' + base62uuid(3)
+
+ self.context.emitc(f'auto {self.arr_len} = {self.group}.size();')
+ self.context.emitc(f'auto {self.arr_values} = {self.group}.values();')
+
+ if len(tovec_columns):
+ preproc_scanner = scan(self, self.arr_len)
+ preproc_scanner_it = preproc_scanner.it_var
+ for c in tovec_columns:
+ preproc_scanner.add(f'{c}[{preproc_scanner_it}].init_from'
+ f'({self.vecs}[{preproc_scanner_it}].size,'
+ f' {"buf_" + c} + {self.group}.ht_base'
+ f'[{preproc_scanner_it}]);'
+ )
+ preproc_scanner.finalize()
+
+ self.context.emitc(f'GC::scratch_space = GC::gc_handle ? &(GC::gc_handle->scratch) : nullptr;')
+ # gscanner = scan(self, self.group, loop_style = 'for_each')
+ gscanner = scan(self, self.arr_len)
key_var = 'key_'+base62uuid(7)
val_var = 'val_'+base62uuid(7)
- gscanner.add(f'auto &{key_var} = {gscanner.it_var}.first;', position = 'front')
- gscanner.add(f'auto &{val_var} = {gscanner.it_var}.second;', position = 'front')
+ # gscanner.add(f'auto &{key_var} = {gscanner.it_var}.first;', position = 'front')
+ # gscanner.add(f'auto &{val_var} = {gscanner.it_var}.second;', position = 'front')
+ gscanner.add(f'auto &{key_var} = {self.arr_values}[{gscanner.it_var}];', position = 'front')
+ gscanner.add(f'auto &{val_var} = {self.vecs}[{gscanner.it_var}];', position = 'front')
len_var = None
def define_len_var():
nonlocal len_var
@@ -627,7 +662,7 @@ class groupby_c(ast_node):
materialize_builtin = materialize_builtin,
count=lambda:f'{val_var}.size')
- for ce in cexprs:
+ for i, ce in enumerate(cexprs):
ex = ce[1]
materialize_builtin = {}
if type(ex.udf_called) is udf:
@@ -640,9 +675,18 @@ class groupby_c(ast_node):
materialize_builtin['_builtin_ret'] = f'{ce[0]}.back()'
gscanner.add(f'{ex.eval(c_code = True, y=get_var_names, materialize_builtin = materialize_builtin)};\n')
continue
- gscanner.add(f'{ce[0]}.emplace_back({get_var_names_ex(ex)});\n')
+ if col_tovec[i]:
+ if ex.remake_binary(f'{ce[0]}[{gscanner.it_var}]'):
+ gscanner.add(f'{get_var_names_ex(ex)};\n')
+ else:
+ gscanner.add(f'{ce[0]}[{gscanner.it_var}] = {get_var_names_ex(ex)};\n')
+ else:
+ gscanner.add(f'{ce[0]}.emplace_back({get_var_names_ex(ex)});\n')
+
+ gscanner.add(f'GC::scratch_space->release();')
gscanner.finalize()
+ self.context.emitc(f'GC::scratch_space = nullptr;')
self.datasource.groupinfo = None
@@ -718,10 +762,11 @@ class groupby(ast_node):
# self.parent.var_table.
self.parent.col_ext.update(l[1])
- def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]], col_names : List[str]):
+ def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]],
+ col_names : List[str], col_types : List[Types], col_tovec : List[bool]):
if self.use_sp_gb:
self.dedicated_gb = groupby_c(self.parent, self.dedicated_glist)
- self.dedicated_gb.finalize(cexprs, var_table, col_names)
+ self.dedicated_gb.finalize(cexprs, var_table, col_names, col_types, col_tovec)
class join(ast_node):
@@ -1300,7 +1345,7 @@ class outfile(ast_node):
filename = self.node['loc']['literal'] if 'loc' in self.node else self.node['literal']
sep = ',' if 'term' not in self.node else self.node['term']['literal']
file_pointer = 'fp_' + base62uuid(6)
- self.addc(f'FILE* {file_pointer} = fopen("{filename}", "w");')
+ self.addc(f'FILE* {file_pointer} = fopen("{filename}", "wb");')
self.addc(f'{self.parent.out_table.contextname_cpp}->printall("{sep}", "\\n", nullptr, {file_pointer});')
self.addc(f'fclose({file_pointer});')
self.context.ccode += self.ccode
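
The net effect of `finalize` on a vector-valued output column: instead of one heap vector per group, a single buffer of `total_sz` elements is `calloc`ed up front and each group's output is initialized as a view over its slice via `init_from`. A minimal sketch of that layout trick (`dbl_view` and `make_group_slices` are hypothetical stand-ins for `vector_type<T>` and the generated code):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Toy stand-in for vector_type<T>::init_from: a non-owning view over a slice.
struct dbl_view {
    double* data;
    uint32_t size;
    void init_from(uint32_t sz, double* buf) { size = sz; data = buf; }
};

// groups: per-group element counts; total is their sum (the table size).
std::vector<dbl_view> make_group_slices(const std::vector<uint32_t>& groups,
                                        uint32_t total) {
    // One allocation serves every group's output, as in the generated code.
    auto* buf = static_cast<double*>(calloc(total, sizeof(double)));
    std::vector<dbl_view> cols(groups.size());
    uint32_t offset = 0;
    for (size_t i = 0; i < groups.size(); ++i) {
        cols[i].init_from(groups[i], buf + offset); // view over group i's slice
        offset += groups[i];
    }
    return cols;
}
```
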
diff --git a/reconstruct/expr.py b/reconstruct/expr.py
index af1f0cb..135a21e 100644
--- a/reconstruct/expr.py
+++ b/reconstruct/expr.py
@@ -367,6 +367,19 @@ class expr(ast_node):
self.curr_code += c.codegen(delegate)
return self.curr_code
+ def remake_binary(self, ret_expr):
+ if self.root:
+ self.oldsql = self.sql
+ if (self.opname in builtin_binary_ops):
+ patched_opname = 'aqop_' + self.opname
+ self.sql = (f'{patched_opname}({self.children[0].sql}, '
+ f'{self.children[1].sql}, {ret_expr})')
+ return True
+ elif self.opname in builtin_vecfunc:
+ self.sql = self.sql[:self.sql.rindex(')')]
+ self.sql += ', ' + ret_expr + ')'
+ return True
+ return False
def __str__(self):
return self.sql
def __repr__(self):
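
`remake_binary` rewrites a projection expression so the generated C++ writes straight into a preallocated group slice instead of materializing a temporary vector. A minimal sketch of the two shapes of generated code; the `aqop_` prefix comes from the patch, but `aqop_add` and its signature are illustrative assumptions:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical destination-passing helper: the result slot is preallocated,
// so no temporary vector is created inside the group-by loop.
void aqop_add(const std::vector<int>& a, const std::vector<int>& b,
              std::vector<int>& out) {
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + b[i];
}

void project(const std::vector<int>& a, const std::vector<int>& b) {
    // Without remake_binary: evaluate into a temporary, then copy it out.
    std::vector<int> tmp(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) tmp[i] = a[i] + b[i];

    // With remake_binary: the destination (e.g. col[i], a preallocated slice)
    // is passed as the last argument, as in aqop_add(a, b, col[i]).
    std::vector<int> out(a.size());
    aqop_add(a, b, out);
}
```
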
diff --git a/server/aggregations.h b/server/aggregations.h
index ccd5c25..5b67e03 100644
--- a/server/aggregations.h
+++ b/server/aggregations.h
@@ -1,5 +1,6 @@
#pragma once
#include "types.h"
+#include "gc.h"
#include
#include
#include
@@ -12,7 +13,7 @@ size_t count(const VT& v) {
}
template <class T>
-constexpr static inline size_t count(const T&) { return 1; }
+constexpr static size_t count(const T&) { return 1; }
// TODO: Specializations for dt/str/none
template class VT>
@@ -29,14 +30,19 @@ double avg(const VT& v) {
return (sum(v) / static_cast(v.size));
}
+template class VT, class Ret>
+void sqrt(const VT& v, Ret& ret) {
+ for (uint32_t i = 0; i < v.size; ++i)
+ ret[i] = sqrt(v[i]);
+}
+
template class VT>
VT sqrt(const VT& v) {
VT ret(v.size);
- for (uint32_t i = 0; i < v.size; ++i) {
- ret[i] = sqrt(v[i]);
- }
+ sqrt(v, ret);
return ret;
}
+
template
T truncate(const T& v, const uint32_t precision) {
auto multiplier = pow(10, precision);
@@ -73,109 +79,153 @@ T min(const VT& v) {
min_v = min_v < _v ? min_v : _v;
return min_v;
}
-template class VT>
-decayed_t mins(const VT& arr) {
+
+// simplify this using a template std::binary_function = std::less;
+template class VT, class Ret>
+void mins(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- std::deque> cache;
- decayed_t ret(len);
T min = std::numeric_limits::max();
for (int i = 0; i < len; ++i) {
if (arr[i] < min)
min = arr[i];
ret[i] = min;
}
- return ret;
}
+
template class VT>
-decayed_t maxs(const VT& arr) {
+decayed_t mins(const VT& arr) {
+ decayed_t ret(arr.size);
+ mins(arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void maxs(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t ret(len);
T max = std::numeric_limits::min();
for (int i = 0; i < len; ++i) {
if (arr[i] > max)
max = arr[i];
ret[i] = max;
}
- return ret;
}
template class VT>
-decayed_t minw(uint32_t w, const VT& arr) {
+decayed_t maxs(const VT& arr) {
+ decayed_t ret(arr.size);
+ maxs(arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void minw(uint32_t w, const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t ret(len);
std::deque> cache;
for (int i = 0; i < len; ++i) {
if (!cache.empty() && cache.front().second == i - w) cache.pop_front();
+
while (!cache.empty() && cache.back().first > arr[i]) cache.pop_back();
cache.push_back({ arr[i], i });
ret[i] = cache.front().first;
}
- return ret;
}
template class VT>
-decayed_t maxw(uint32_t w, const VT& arr) {
+decayed_t minw(uint32_t w, const VT& arr) {
+ decayed_t ret(arr.size);
+ minw(w, arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void maxw(uint32_t w, const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t ret(len);
std::deque> cache;
for (int i = 0; i < len; ++i) {
if (!cache.empty() && cache.front().second == i - w) cache.pop_front();
- while (!cache.empty() && cache.back().first > arr[i]) cache.pop_back();
+ while (!cache.empty() && cache.back().first < arr[i]) cache.pop_back();
cache.push_back({ arr[i], i });
- arr[i] = cache.front().first;
+ ret[i] = cache.front().first;
}
- return ret;
}
template class VT>
-decayed_t> ratiow(uint32_t w, const VT& arr) {
+inline decayed_t maxw(uint32_t w, const VT& arr) {
+ decayed_t ret(arr.size);
+ maxw(w, arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void ratiow(uint32_t w, const VT& arr, Ret& ret) {
typedef std::decay_t> FPType;
uint32_t len = arr.size;
if (arr.size <= w)
len = 1;
w = w > len ? len : w;
- decayed_t ret(arr.size);
ret[0] = 0;
for (uint32_t i = 0; i < w; ++i)
ret[i] = arr[i] / (FPType)arr[0];
for (uint32_t i = w; i < arr.size; ++i)
ret[i] = arr[i] / (FPType) arr[i - w];
+}
+
+template class VT>
+inline decayed_t> ratiow(uint32_t w, const VT& arr) {
+ typedef std::decay_t> FPType;
+ decayed_t ret(arr.size);
+ ratiow(w, arr, ret);
return ret;
}
template class VT>
-decayed_t> ratios(const VT& arr) {
+inline decayed_t> ratios(const VT& arr) {
return ratiow(1, arr);
}
-template class VT>
-decayed_t> sums(const VT& arr) {
+template class VT, class Ret>
+inline void ratios(const VT& arr, Ret& ret) {
+ return ratiow(1, arr, ret);
+}
+
+template class VT, class Ret>
+void sums(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t> ret(len);
uint32_t i = 0;
if (len) ret[i++] = arr[0];
for (; i < len; ++i)
ret[i] = ret[i - 1] + arr[i];
- return ret;
}
template class VT>
-decayed_t>> avgs(const VT& arr) {
+inline decayed_t> sums(const VT& arr) {
+ decayed_t> ret(arr.size);
+ sums(arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void avgs(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
typedef types::GetFPType> FPType;
- decayed_t ret(len);
uint32_t i = 0;
types::GetLongType s;
if (len) s = ret[i++] = arr[0];
for (; i < len; ++i)
ret[i] = (s += arr[i]) / (FPType)(i + 1);
- return ret;
}
template class VT>
-decayed_t> sumw(uint32_t w, const VT& arr) {
+inline decayed_t>> avgs(const VT& arr) {
+ typedef types::GetFPType> FPType;
+ decayed_t ret(arr.size);
+ avgs(arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void sumw(uint32_t w, const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t> ret(len);
uint32_t i = 0;
w = w > len ? len : w;
if (len) ret[i++] = arr[0];
@@ -183,11 +233,17 @@ decayed_t> sumw(uint32_t w, const VT& arr) {
ret[i] = ret[i - 1] + arr[i];
for (; i < len; ++i)
ret[i] = ret[i - 1] + arr[i] - arr[i - w];
- return ret;
}
template class VT>
-void avgw(uint32_t w, const VT& arr, decayed_t>>& ret) {
+decayed_t> sumw(uint32_t w, const VT& arr) {
+ decayed_t> ret(arr.size);
+ sumw(w, arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void avgw(uint32_t w, const VT& arr, Ret& ret) {
typedef types::GetFPType> FPType;
const uint32_t& len = arr.size;
uint32_t i = 0;
@@ -201,26 +257,19 @@ void avgw(uint32_t w, const VT& arr, decayed_t class VT>
-decayed_t>> avgw(uint32_t w, const VT& arr) {
+inline decayed_t>> avgw(uint32_t w, const VT& arr) {
typedef types::GetFPType> FPType;
const uint32_t& len = arr.size;
decayed_t ret(len);
- uint32_t i = 0;
- types::GetLongType s{};
- w = w > len ? len : w;
- if (len) s = ret[i++] = arr[0];
- for (; i < w; ++i)
- ret[i] = (s += arr[i]) / (FPType)(i + 1);
- for (; i < len; ++i)
- ret[i] = ret[i - 1] + (arr[i] - arr[i - w]) / (FPType)w;
+ avgw(w, arr, ret);
return ret;
}
-template class VT, bool sd = false>
-decayed_t>> varw(uint32_t w, const VT& arr) {
+template class VT, class Ret, bool sd = false>
+void varw(uint32_t w, const VT& arr,
+ Ret& ret) {
using FPType = types::GetFPType>;
const uint32_t& len = arr.size;
- decayed_t ret(len);
uint32_t i = 0;
types::GetLongType s{};
w = w > len ? len : w;
@@ -252,7 +301,14 @@ decayed_t>> varw(uint32_t w, const VT
if constexpr(sd)
if(i)
ret[i-1] = sqrt(ret[i-1]);
-
+}
+
+
+template class VT, bool sd = false>
+inline decayed_t>> varw(uint32_t w, const VT& arr) {
+ using FPType = types::GetFPType>;
+ decayed_t ret(arr.size);
+ varw>>, sd>(w, arr, ret);
return ret;
}
@@ -274,11 +330,10 @@ types::GetFPType>> var(const VT& arr) {
return (ssq - s * s / (FPType)(len + 1)) / (FPType)(len + 1);
}
-template class VT, bool sd = false>
-decayed_t>> vars(const VT& arr) {
+template class VT, class Ret, bool sd = false>
+void vars(const VT& arr, Ret& ret) {
typedef types::GetFPType> FPType;
const uint32_t& len = arr.size;
- decayed_t ret(len);
uint32_t i = 0;
types::GetLongType s{};
FPType MnX{};
@@ -298,70 +353,103 @@ decayed_t>> vars(const VT& arr) {
ret[i] = MnX / (FPType)(i + 1);
if constexpr(sd) ret[i] = sqrt(ret[i]);
}
+}
+
+template class VT, bool sd = false>
+inline decayed_t>> vars(const VT& arr) {
+ typedef types::GetFPType> FPType;
+ decayed_t ret(arr.size);
+ vars>>, sd>(arr, ret);
return ret;
}
+
template class VT>
-types::GetFPType>> stddev(const VT& arr) {
+inline types::GetFPType>> stddev(const VT& arr) {
return sqrt(var(arr));
}
+
template class VT>
-decayed_t>> stddevs(const VT& arr) {
+inline decayed_t>> stddevs(const VT& arr) {
return vars(arr);
}
+
template class VT>
-decayed_t>> stddevw(uint32_t w, const VT& arr) {
+inline decayed_t>> stddevw(uint32_t w, const VT& arr) {
return varw(w, arr);
}
+
+template class VT, class Ret>
+inline auto stddevs(const VT& arr, Ret& ret) {
+ return vars(arr, ret);
+}
+
+template class VT, class Ret>
+inline auto stddevw(uint32_t w, const VT& arr, Ret& ret) {
+ return varw(w, arr, ret);
+}
+
+
// use getSignedType
-template class VT>
-decayed_t deltas(const VT& arr) {
+template class VT, class Ret>
+void deltas(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t ret(len);
uint32_t i = 0;
if (len) ret[i++] = 0;
for (; i < len; ++i)
ret[i] = arr[i] - arr[i - 1];
- return ret;
}
template class VT>
-decayed_t prev(const VT& arr) {
+inline decayed_t deltas(const VT& arr) {
+ decayed_t ret(arr.size);
+ deltas(arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void prev(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t ret(len);
uint32_t i = 0;
if (len) ret[i++] = arr[0];
for (; i < len; ++i)
ret[i] = arr[i - 1];
- return ret;
}
template class VT>
-decayed_t aggnext(const VT& arr) {
+inline decayed_t prev(const VT& arr) {
+ decayed_t ret(arr.size);
+ prev(arr, ret);
+ return ret;
+}
+
+template class VT, class Ret>
+void aggnext(const VT& arr, Ret& ret) {
const uint32_t& len = arr.size;
- decayed_t ret(len);
uint32_t i = 1;
for (; i < len; ++i)
ret[i - 1] = arr[i];
if (len > 0) ret[len - 1] = arr[len - 1];
+}
+
+template class VT>
+inline decayed_t aggnext(const VT& arr) {
+ decayed_t ret(arr.size);
+ aggnext(arr, ret);
return ret;
}
template class VT>
T last(const VT& arr) {
if (!arr.size) return 0;
- const uint32_t& len = arr.size;
return arr[arr.size - 1];
}
template class VT>
T first(const VT& arr) {
if (!arr.size) return 0;
- const uint32_t& len = arr.size;
return arr[0];
}
-
-
#define __DEFAULT_AGGREGATE_FUNCTION__(NAME, RET) \
template constexpr T NAME(const T& v) { return RET; }
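
The refactor above follows one pattern throughout: each vector aggregate gains an in-place overload taking a caller-supplied `Ret&` destination, and the original value-returning version becomes a thin wrapper that allocates and forwards. A minimal sketch of the pattern with `std::vector` (the real code uses `decayed_t`/`vector_type` and the scratch space):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// In-place form: the caller supplies the destination, so grouped code can
// point it at a slice of one big preallocated buffer instead of allocating
// a fresh vector per group.
template <class T, class Ret>
void running_min(const std::vector<T>& arr, Ret& ret) {
    T m = std::numeric_limits<T>::max();
    for (std::size_t i = 0; i < arr.size(); ++i) {
        if (arr[i] < m) m = arr[i];
        ret[i] = m;
    }
}

// Thin wrapper preserving the old value-returning API: allocate and forward.
template <class T>
std::vector<T> running_min(const std::vector<T>& arr) {
    std::vector<T> ret(arr.size());
    running_min(arr, ret);
    return ret;
}
```

The diff also fixes two latent bugs in `maxw` along the way: the monotonic-deque comparison used `>` (which computes a minimum), and the result was written back into `arr` instead of `ret`.
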
diff --git a/server/gc.h b/server/gc.h
index 1099248..d8e84f8 100644
--- a/server/gc.h
+++ b/server/gc.h
@@ -1,9 +1,43 @@
#pragma once
-#ifndef __AQ_USE_THREADEDGC__
#include
-class GC {
-private:;
+class ScratchSpace {
+public:
+ void* ret;
+ char* scratchspace;
+ size_t ptr;
+ size_t cnt;
+ size_t capacity;
+ size_t initial_capacity;
+ void* temp_memory_fractions;
+
+ //uint8_t status;
+ // record maximum size
+ constexpr static uint8_t Grow = 0x1;
+ // no worry about overflow
+ constexpr static uint8_t Use = 0x0;
+
+ void init(size_t initial_capacity);
+
+    // allocate sz bytes from the scratch space
+ void* alloc(uint32_t sz);
+
+ void register_ret(void* ret);
+
+    // rewind the allocation cursor; hand any spilled buffers back to the GC
+ void release();
+
+    // release, then shrink the scratch space back to its initial capacity
+    void reset();
+
+    // free the scratch buffer and any spilled fractions
+    void cleanup();
+};
+
+
+#ifndef __AQ_USE_THREADEDGC__
+class GC {
+private:
size_t max_slots,
interval, forced_clean,
forceclean_timer = 0;
@@ -18,7 +52,6 @@ private:;
std::atomic current_size;
volatile bool lock;
using gc_deallocator_t = void (*)(void*);
-
// maybe use volatile std::thread::id instead
protected:
void acquire_lock();
@@ -29,28 +62,38 @@ protected:
void terminate_daemon();
public:
- void reg(void* v, uint32_t sz = 1,
+ ScratchSpace scratch;
+ void reg(void* v, uint32_t sz = 0xffffffff,
void(*f)(void*) = free
);
+ uint32_t get_threshold() const {
+ return threshould;
+ }
+
GC(
uint64_t max_size = 0xfffffff, uint32_t max_slots = 4096,
uint32_t interval = 10000, uint32_t forced_clean = 1000000,
- uint32_t threshould = 64 //one seconds
+        uint32_t threshould = 64, // one second
+ uint32_t scratch_sz = 0x1000000 // 16 MB
) : max_size(max_size), max_slots(max_slots),
interval(interval), forced_clean(forced_clean),
threshould(threshould) {
start_deamon();
GC::gc_handle = this;
+ this->scratch.init(1);
} // 256 MB
~GC(){
terminate_daemon();
+ scratch.cleanup();
}
+
static GC* gc_handle;
+ static ScratchSpace *scratch_space;
    template <class T>
- constexpr static inline gc_deallocator_t _delete(T*){
+ static inline gc_deallocator_t _delete(T*) {
return [](void* v){
delete (T*)v;
};
diff --git a/server/hasher.h b/server/hasher.h
index b632319..22a98e2 100644
--- a/server/hasher.h
+++ b/server/hasher.h
@@ -132,7 +132,3 @@ namespace ankerl::unordered_dense{
struct hash> : public hasher{ };
}
-struct aq_hashtable_value_t {
- uint32_t id;
- uint32_t cnt;
-};
\ No newline at end of file
diff --git a/server/io.h b/server/io.h
index e8ff02a..3c3bf68 100644
--- a/server/io.h
+++ b/server/io.h
@@ -4,6 +4,7 @@
#include
#include
#include
+#include <string_view>
template
std::string generate_printf_string(const char* sep = " ", const char* end = "\n") {
std::string str;
@@ -25,6 +26,11 @@ inline decltype(auto) print_hook(const bool& v) {
return v? "true" : "false";
}
+template<>
+inline decltype(auto) print_hook(const std::string_view& v) {
+ return v.data();
+}
+
extern char* gbuf;
void setgbuf(char* buf = 0);
diff --git a/server/libaquery.cpp b/server/libaquery.cpp
index 9521ce9..60e4b81 100644
--- a/server/libaquery.cpp
+++ b/server/libaquery.cpp
@@ -55,6 +55,7 @@ void print(const bool&v, const char* delimiter){
std::cout<< (v?"true":"false") << delimiter;
}
+
template <class T>
T getInt(const char*& buf){
T ret = 0;
@@ -451,6 +452,9 @@ void GC::reg(void* v, uint32_t sz, void(*f)(void*)) { //~ 40ns expected v. free
f(v);
return;
}
+ else if (sz == 0xffffffff)
+ sz = this->threshould;
+
auto _q = static_cast(q);
while(lock);
++alive_cnt;
@@ -464,6 +468,72 @@ void GC::reg(void* v, uint32_t sz, void(*f)(void*)) { //~ 40ns expected v. free
#endif
inline GC* GC::gc_handle = nullptr;
+inline ScratchSpace* GC::scratch_space = nullptr;
+
+void ScratchSpace::init(size_t initial_capacity) {
+ ret = nullptr;
+    scratchspace = static_cast<char*>(malloc(initial_capacity));
+ ptr = cnt = 0;
+ capacity = initial_capacity;
+ this->initial_capacity = initial_capacity;
+    temp_memory_fractions = new vector_type<char*>();
+}
+
+inline void* ScratchSpace::alloc(uint32_t sz){
+ ptr = this->cnt;
+ this->cnt += sz; // major cost
+ if (this->cnt > capacity) {
+ [[unlikely]]
+ capacity = this->cnt + (capacity >> 1);
+        auto vec_tmpmem_fractions = static_cast<vector_type<char*>*>(temp_memory_fractions);
+ vec_tmpmem_fractions->emplace_back(scratchspace);
+        scratchspace = static_cast<char*>(malloc(capacity));
+ ptr = 0;
+ }
+ return scratchspace + ptr;
+}
+
+inline void ScratchSpace::register_ret(void* ret){
+ this->ret = ret;
+}
+
+inline void ScratchSpace::release(){
+ ptr = cnt = 0;
+ auto vec_tmpmem_fractions =
+        static_cast<vector_type<char*>*>(temp_memory_fractions);
+ if (vec_tmpmem_fractions->size) {
+ [[unlikely]]
+ for(auto& mem : *vec_tmpmem_fractions){
+ //free(mem);
+ GC::gc_handle->reg(mem);
+ }
+ vec_tmpmem_fractions->clear();
+ }
+}
+
+inline void ScratchSpace::reset() {
+ this->release();
+ ret = nullptr;
+ if (capacity != initial_capacity){
+ capacity = initial_capacity;
+        scratchspace = static_cast<char*>(realloc(scratchspace, capacity));
+ }
+}
+
+void ScratchSpace::cleanup(){
+ auto vec_tmpmem_fractions =
+        static_cast<vector_type<char*>*>(temp_memory_fractions);
+ if (vec_tmpmem_fractions->size) {
+ for(auto& mem : *vec_tmpmem_fractions){
+ free(mem);
+ //GC::gc_handle->reg(mem);
+ }
+ vec_tmpmem_fractions->clear();
+ }
+ delete vec_tmpmem_fractions;
+ free(this->scratchspace);
+}
+
#include "dragonbox/dragonbox_to_chars.hpp"
@@ -537,4 +607,11 @@ aq_to_chars(void* value, char* buffer) {
return buffer;
}
+template<>
+char*
+aq_to_chars<std::string_view>(void* value, char* buffer){
+    const auto& src = *static_cast<std::string_view*>(value);
+ memcpy(buffer, src.data(), src.size());
+ return buffer + src.size();
+}
diff --git a/server/libaquery.h b/server/libaquery.h
index e5b9516..7635310 100644
--- a/server/libaquery.h
+++ b/server/libaquery.h
@@ -161,6 +161,7 @@ template<> char* aq_to_chars(void* , char*);
template<> char* aq_to_chars(void* , char*);
template<> char* aq_to_chars