bill 2 years ago
commit da5b5065e6

@ -4,10 +4,10 @@ MonetDB_INC =
Defines =
CXXFLAGS = --std=c++2a
ifeq ($(AQ_DEBUG), 1)
OPTFLAGS = -g3 -fsanitize=address -fsanitize=leak
OPTFLAGS = -g3 #-fsanitize=address
LINKFLAGS =
else
OPTFLAGS = -O3 -DNDEBUG -fno-stack-protector
OPTFLAGS = -Ofast -DNDEBUG -fno-stack-protector
LINKFLAGS = -flto -s
endif
SHAREDFLAGS = -shared

@ -1,31 +1,20 @@
# AQuery++ Database
### Please try the latest code in the dev branch if you encounter any problems. Use `git checkout dev` to switch branches.
## Introduction
AQuery++ Database is a cross-platform, in-memory, column-store database with compiled query execution. (**Note**: if you encounter any problems, feel free to contact me at ys3540@nyu.edu.)
# Architecture
![Architecture](./docs/arch-hybrid.svg)
## Docker (Recommended):
- See the installation instructions at [docker.com](https://www.docker.com). Run **Docker Desktop** to start the Docker engine.
- In the AQuery root directory, run `make docker` to build the Docker image from scratch.
- On Arm-based Macs, you will have to build and run the **x86_64** Docker image because MonetDB doesn't offer official binaries for arm64 Linux. (Run `docker buildx build --platform=linux/amd64 -t aquery .` instead of `make docker`.)
- Finally, run the image in **interactive** mode (`docker run --name aquery -it aquery`).
- To access the container again, run `docker start -ai aquery`.
- If you need a system shell from within AQuery, type `dbg` to drop into the Python interpreter, then `os.system('sh')` to launch a shell.
- A prebuilt image is available on [Docker Hub](https://hub.docker.com/repository/docker/sunyinqi0508/aquery), but building the image yourself is highly recommended (see [#2](../../issues/2)).
## CIMS Computer Lab (Only for NYU affiliates who have access)
1. Clone this git repo in CIMS.
2. Download the [patch](https://drive.google.com/file/d/1YkykhM6u0acZ-btQb4EUn4jAEXPT81cN/view?usp=sharing)
3. Decompress the patch to any directory and execute the script inside with `source ./cims.sh`. Use the `source` command or `. ./cims.sh` (dot space) because the script sets environment variables. Note that it only works with bash-compatible shells (e.g. dash, zsh, but not csh).
4. Execute `python3 ./prompt.py`
## AQuery Compiler
- A query is first processed by the AQuery Compiler, which is composed of a frontend that parses the query into an AST and a backend that generates the target code that executes the query.
- The frontend of the AQuery++ Compiler is built on top of [mo-sql-parsing](https://github.com/klahnakoski/mo-sql-parsing), with modifications to handle the AQuery dialect and extensions.
- The backend of the AQuery++ Compiler generates target code depending on the execution engine: C++ code for the AQuery Execution Engine, SQL plus a C++ post-processor for the Hybrid Engine, or k9 code for the k9 Engine.
## Execution Engines
- AQuery++ supports different execution engines thanks to the decoupled compiler structure.
- Hybrid Execution Engine: splits the query into two parts. The SQL-compliant part is executed by an embedded version of MonetDB, and everything else is executed by a post-processing module that the AQuery++ Compiler generates in C++ and then compiles and runs (see the example query after this list).
- AQuery Library: a set of header-based libraries providing column arithmetic and operations inspired by array programming languages like kdb. The C++ post-processor code uses this library, which significantly reduces the complexity of the generated code and its compile time while maintaining the best performance. The libraries can also be used by UDFs and user modules, making it easier for users to write simple, efficient, yet powerful extensions.
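
As a rough sketch of how the Hybrid Engine splits work (illustrative only; the exact plan the compiler emits may differ), consider this query from the benchmark suite over the `trade10m` table:

```sql
-- The GROUP BY part is SQL-compliant and can be handed to embedded MonetDB;
-- the order-dependent moving average avgs(5, price) over time-sorted rows is
-- evaluated by the generated C++ post-processor using the AQuery library.
SELECT stocksymbol, avgs(5, price)
FROM trade10m
ASSUMING ASC time
GROUP BY stocksymbol
```

Here `ASSUMING ASC time` is the AQuery ordering clause that guarantees rows are processed in time order before the windowed aggregate is applied.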
## Singularity Container
1. Build the container: `singularity build aquery.sif aquery.def`
2. Execute the container: `singularity exec aquery.sif sh`
3. Run AQuery: `python3 ./prompt.py`
# Native Installation:
# Installation:
## Requirements
1. A recent version of Linux, Windows, or macOS, with a recent C++ compiler that has C++17 (1z) support. (C++20 is recommended, if available, for heterogeneous lookup on unordered containers.)
- GCC: 9.0 or above (g++ 7.x, 8.x fail to handle fold-expressions due to a compiler bug)
@ -38,10 +27,6 @@ AQuery++ Database is a cross-platform, In-Memory Column-Store Database that inco
- On macOS, MonetDB can be installed easily via Homebrew: `brew install monetdb`.
3. Python 3.6 or above; install the required packages from requirements.txt with `python3 -m pip install -r requirements.txt`
## Installation
AQuery is tested on mainstream operating systems such as Windows, macOS, and Linux.
### Windows
There are multiple options for running AQuery on Windows, but for better consistency I recommend a simulated Linux environment such as **Windows Subsystem for Linux** (1 or 2), **Docker**, or a **Linux virtual machine**. You can also use the native toolchain from Microsoft Visual Studio or gcc from Winlabs/Cygwin/MinGW.
@ -97,7 +82,24 @@ There're multiple options to run AQuery on Windows. But for better consistency I
In this case, upgrade Anaconda or your compiler, or use the Python from your OS or package manager instead. Alternatively (**NOT recommended**), copy or link the library from your system (e.g. /usr/lib/x86_64-linux-gnu/libstdc++.so.6) into Anaconda's library directory (e.g. ~/Anaconda3/lib/).
## Docker (Recommended):
- See the installation instructions at [docker.com](https://www.docker.com). Run **Docker Desktop** to start the Docker engine.
- In the AQuery root directory, run `make docker` to build the Docker image from scratch.
- On Arm-based Macs, you will have to build and run the **x86_64** Docker image because MonetDB doesn't offer official binaries for arm64 Linux. (Run `docker buildx build --platform=linux/amd64 -t aquery .` instead of `make docker`.)
- Finally, run the image in **interactive** mode (`docker run --name aquery -it aquery`).
- To access the container again, run `docker start -ai aquery`.
- If you need a system shell from within AQuery, type `dbg` to drop into the Python interpreter, then `os.system('sh')` to launch a shell.
- A prebuilt image is available on [Docker Hub](https://hub.docker.com/repository/docker/sunyinqi0508/aquery), but building the image yourself is highly recommended (see [#2](../../issues/2)).
## CIMS Computer Lab (Only for NYU affiliates who have access)
1. Clone this git repo in CIMS.
2. Download the [patch](https://drive.google.com/file/d/1YkykhM6u0acZ-btQb4EUn4jAEXPT81cN/view?usp=sharing)
3. Decompress the patch to any directory and execute the script inside with `source ./cims.sh`. Use the `source` command or `. ./cims.sh` (dot space) because the script sets environment variables. Note that it only works with bash-compatible shells (e.g. dash, zsh, but not csh).
4. Execute `python3 ./prompt.py`
## Singularity Container
1. Build the container: `singularity build aquery.sif aquery.def`
2. Execute the container: `singularity exec aquery.sif sh`
3. Run AQuery: `python3 ./prompt.py`
# Usage
`python3 prompt.py` will launch the interactive command prompt. The server binary will be automatically rebuilt and started.
### Commands:
@ -268,17 +270,6 @@ SELECT * FROM my_table WHERE c1 > 10
- `sqrt(x), trunc(x), and other builtin math functions`: value-wise math operations. `sqrt(x)[i] = sqrt(x[i])`
- `pack(cols, ...)`: packs multiple columns of exactly the same type into a single column.
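
For example (an illustrative sketch reusing the benchmark tables; exact output formatting may differ), these builtins operate column-wise and can be nested inside ordinary aggregates:

```sql
-- value-wise math: sqrt(price)[i] = sqrt(price[i])
SELECT sqrt(price) FROM trade1m
-- a windowed builtin inside an aggregate, as in the benchmark queries
SELECT stocksymbol, MAX(stddevs(3, price))
FROM trade1m
ASSUMING ASC time
GROUP BY stocksymbol
```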
# Architecture
![Architecture](./docs/arch-hybrid.svg)
## AQuery Compiler
- A query is first processed by the AQuery Compiler, which is composed of a frontend that parses the query into an AST and a backend that generates the target code that executes the query.
- The frontend of the AQuery++ Compiler is built on top of [mo-sql-parsing](https://github.com/klahnakoski/mo-sql-parsing), with modifications to handle the AQuery dialect and extensions.
- The backend of the AQuery++ Compiler generates target code depending on the execution engine: C++ code for the AQuery Execution Engine, SQL plus a C++ post-processor for the Hybrid Engine, or k9 code for the k9 Engine.
## Execution Engines
- AQuery++ supports different execution engines thanks to the decoupled compiler structure.
- Hybrid Execution Engine: splits the query into two parts. The SQL-compliant part is executed by an embedded version of MonetDB, and everything else is executed by a post-processing module that the AQuery++ Compiler generates in C++ and then compiles and runs.
- AQuery Library: a set of header-based libraries providing column arithmetic and operations inspired by array programming languages like kdb. The C++ post-processor code uses this library, which significantly reduces the complexity of the generated code and its compile time while maintaining the best performance. The libraries can also be used by UDFs and user modules, making it easier for users to write simple but powerful extensions.
# Roadmap
- [x] SQL Parser -> AQuery Parser (Front End)

@ -2,7 +2,7 @@
## GLOBAL CONFIGURATION FLAGS
version_string = '0.5.4a'
version_string = '0.6.0a'
add_path_to_ldpath = True
rebuild_backend = False
run_backend = True

@ -0,0 +1,6 @@
CREATE TABLE trade01m(stocksymbol STRING, time INT, quantity INT, price INT)
load data infile "../tables/trade01m.csv" into table trade01m fields terminated by ','
CREATE TABLE trade1m(stocksymbol STRING, time INT, quantity INT, price INT)
load data infile "../tables/trade1m.csv" into table trade1m fields terminated by ','
CREATE TABLE trade10m(stocksymbol STRING, time INT, quantity INT, price INT)
load data infile "../tables/trade10m.csv" into table trade10m fields terminated by ','

@ -0,0 +1,5 @@
-- select rows
<sql>
CREATE TABLE res0 AS
SELECT * FROM trade10m
</sql>

@ -0,0 +1,7 @@
-- groupby_multi_different_functions
<sql>
CREATE TABLE res1 AS
SELECT avg(quantity) AS avg_quan, min(price) AS min_p
FROM trade1m
GROUP BY stocksymbol, time
</sql>

@ -0,0 +1,4 @@
SELECT stocksymbol, MAX(stddevs(3, price))
FROM trade1m
ASSUMING ASC time
GROUP BY stocksymbol

@ -0,0 +1,4 @@
-- count values
<sql>
SELECT COUNT(*) FROM trade10m
</sql>

@ -0,0 +1,7 @@
-- group by multiple keys
<sql>
create table res3 AS
SELECT sum(quantity) as sum_quantity
FROM trade01m
GROUP BY stocksymbol, price
</sql>

@ -0,0 +1,5 @@
-- append tables
<sql>
CREATE TABLE res4 AS
SELECT * FROM trade10m UNION ALL SELECT * FROM trade10m
</sql>

@ -0,0 +1,5 @@
CREATE table res7 AS
SELECT stocksymbol, avgs(5, price)
FROM trade10m
ASSUMING ASC time
GROUP BY stocksymbol

@ -0,0 +1,6 @@
<sql>
CREATE TABLE res8 AS
SELECT stocksymbol, quantity, price
FROM trade10m
WHERE time >= 5288 and time <= 7000
</sql>

@ -0,0 +1,6 @@
<sql>
CREATE TABLE res9 AS
SELECT stocksymbol, MAX(price) - MIN(price)
FROM trade10m
GROUP BY stocksymbol
</sql>

@ -0,0 +1,3 @@
-- q0 select rows
CREATE TABLE res0 (a String, b Int32, c Int32, d Int32) ENGINE = MergeTree() ORDER BY b AS
SELECT * FROM benchmark.trade10m

@ -0,0 +1,4 @@
-- groupby_multi_different_functions
SELECT avg(quantity), min(price)
FROM benchmark.trade10m
GROUP BY stocksymbol, time

@ -0,0 +1,8 @@
-- max rolling std
select
stocksymbol,
max(stddevPop(price)) over
(partition by stocksymbol rows between 2 preceding AND CURRENT row) as maxRollingStd
from
(SELECT * FROM benchmark.trade01m ORDER BY time)
GROUP BY stocksymbol

@ -0,0 +1,2 @@
-- count values
SELECT COUNT(*) FROM benchmark.trade10m

@ -0,0 +1,4 @@
-- group by multiple keys
SELECT sum(quantity)
FROM benchmark.trade10m
GROUP BY stocksymbol, price

@ -0,0 +1,2 @@
-- append two tables
SELECT * FROM benchmark.trade10m UNION ALL SELECT * FROM benchmark.trade10m

@ -0,0 +1,5 @@
-- moving_avg
SELECT stocksymbol, groupArrayMovingAvg(5)(price) AS moving_avg_price
FROM
(SELECT * FROM benchmark.trade01m ORDER BY time)
GROUP BY stocksymbol

@ -0,0 +1,3 @@
SELECT stocksymbol, quantity, price
FROM benchmark.trade10m
WHERE time >= 5288 and time <= 7000

@ -0,0 +1,3 @@
SELECT stocksymbol, MAX(price) - MIN(price)
FROM benchmark.trade1m
GROUP BY stocksymbol

@ -0,0 +1,3 @@
-- select rows
CREATE TABLE res0 AS
SELECT * FROM trade10m;

@ -0,0 +1,4 @@
-- groupby_multi_different_functions
SELECT avg(quantity), min(price)
FROM trade10m
GROUP BY stocksymbol, time;

@ -0,0 +1,7 @@
select
stocksymbol,
max(stddev(price)) over
(partition by stocksymbol rows between 2 preceding AND CURRENT row) as maxRollingStd
from
(SELECT * FROM trade01m ORDER BY time) as t
GROUP BY stocksymbol;

@ -0,0 +1,2 @@
-- count values
SELECT COUNT(*) FROM trade10m;

@ -0,0 +1,4 @@
-- group by multiple keys
SELECT sum(quantity)
FROM trade10m
GROUP BY stocksymbol, price;

@ -0,0 +1,2 @@
-- append tables
SELECT * FROM trade10m UNION ALL SELECT * FROM trade10m;

@ -0,0 +1,5 @@
select
stocksymbol,
coalesce(avg(price) over
(partition by stocksymbol order by time rows between 4 preceding AND CURRENT row), price) as rollingAvg
from trade10m;

@ -0,0 +1,3 @@
SELECT stocksymbol, quantity, price
FROM trade01m
WHERE time >= 5288 and time <= 7000

@ -0,0 +1,3 @@
SELECT stocksymbol, MAX(price) - MIN(price)
FROM trade01m
GROUP BY stocksymbol;

@ -117,7 +117,7 @@ class build_manager:
else:
mgr.cxx = os.environ['CXX']
if 'AQ_DEBUG' not in os.environ:
os.environ['AQ_DEBUG'] = '0' if mgr.OptimizationLv else '1'
os.environ['AQ_DEBUG'] = ('0' if mgr.OptimizationLv != '0' else '1')
def libaquery_a(self):
self.build_cmd = [['rm', 'libaquery.a'],['make', 'libaquery']]
@ -184,7 +184,7 @@ class build_manager:
def __init__(self) -> None:
self.method = 'make'
self.cxx = ''
self.OptimizationLv = '0' # [O0, O1, O2, O3, Ofast]
self.OptimizationLv = '4' # [O0, O1, O2, O3, Ofast]
self.Platform = 'amd64'
self.PCH = os.environ['PCH'] if 'PCH' in os.environ else 1
self.StaticLib = 1

@ -80,7 +80,7 @@ int gen_trade_data(int argc, char* argv[])
memmove(p + lens[i], p + lens[0], (lens[i - 1] - lens[i]) * sizeof(int));
permutation(p, lens[0] + N);
// for (int i = 0; i < lens[0] + N; ++i) printf("%d ", p[i]);
FILE* fp = fopen("trade.csv", "w");
FILE* fp = fopen("trade.csv", "wb");
int* last_price = new int[N];
memset(last_price, -1, sizeof(int) * N);
fprintf(fp, "stocksymbol, time, quantity, price\n");
@ -131,7 +131,7 @@ int gen_stock_data(int argc, char* argv[]){
}
IDs[n_stocks] = "S";
names[n_stocks] = "x";
FILE* fp = fopen("./data/stock.csv", "w");
FILE* fp = fopen("./data/stock.csv", "wb");
fprintf(fp, "ID, timestamp, tradeDate, price\n");
char date_str_buf [types::date_t::string_length()];
int* timestamps = new int[n_data];
@ -142,7 +142,7 @@ int gen_stock_data(int argc, char* argv[]){
fprintf(fp, "%s,%d,%s,%d\n", IDs[ui(engine)%(n_stocks + 1)].c_str(), timestamps[i], date, ui(engine) % 1000);
}
fclose(fp);
fp = fopen("./data/base.csv", "w");
fp = fopen("./data/base.csv", "wb");
fprintf(fp, "ID, name\n");
for(int i = 0; i < n_stocks + 1; ++ i){
fprintf(fp, "%s,%s\n", IDs[i].c_str(), names[i].c_str());

@ -110,7 +110,7 @@ class outfile(ast_node):
filename = node['loc']['literal'] if 'loc' in node else node['literal']
sep = ',' if 'term' not in node else node['term']['literal']
file_pointer = 'fp_' + base62uuid(6)
self.emit(f'FILE* {file_pointer} = fopen("{filename}", "w");')
self.emit(f'FILE* {file_pointer} = fopen("{filename}", "wb");')
self.emit(f'{out_table.cxt_name}->printall("{sep}", "\\n", nullptr, {file_pointer});')
self.emit(f'fclose({file_pointer});')
# self.context.headers.add('fstream')

@ -107,9 +107,9 @@ ULongT = Types(8, name = 'uint64', sqlname = 'UINT64', fp_type=DoubleT)
UIntT = Types(7, name = 'uint32', sqlname = 'UINT32', long_type=ULongT, fp_type=FloatT)
UShortT = Types(6, name = 'uint16', sqlname = 'UINT16', long_type=ULongT, fp_type=FloatT)
UByteT = Types(5, name = 'uint8', sqlname = 'UINT8', long_type=ULongT, fp_type=FloatT)
StrT = Types(200, name = 'str', cname = 'const char*', sqlname='TEXT', ctype_name = 'types::ASTR')
TextT = Types(200, name = 'text', cname = 'const char*', sqlname='TEXT', ctype_name = 'types::ASTR')
VarcharT = Types(200, name = 'varchar', cname = 'const char*', sqlname='VARCHAR', ctype_name = 'types::ASTR')
StrT = Types(200, name = 'str', cname = 'string_view', sqlname='TEXT', ctype_name = 'types::ASTR')
TextT = Types(200, name = 'text', cname = 'string_view', sqlname='TEXT', ctype_name = 'types::ASTR')
VarcharT = Types(200, name = 'varchar', cname = 'string_view', sqlname='VARCHAR', ctype_name = 'types::ASTR')
VoidT = Types(200, name = 'void', cname = 'void', sqlname='Null', ctype_name = 'types::None')
class VectorT(Types):
@ -305,7 +305,7 @@ opor = OperatorBase('or', 2, logical, cname = '||', sqlname = ' OR ', call = bin
opxor = OperatorBase('xor', 2, logical, cname = '^', sqlname = ' XOR ', call = binary_op_behavior)
opgt = OperatorBase('gt', 2, logical, cname = '>', sqlname = '>', call = binary_op_behavior)
oplt = OperatorBase('lt', 2, logical, cname = '<', sqlname = '<', call = binary_op_behavior)
opge = OperatorBase('gte', 2, logical, cname = '>=', sqlname = '>=', call = binary_op_behavior)
opgte = OperatorBase('gte', 2, logical, cname = '>=', sqlname = '>=', call = binary_op_behavior)
oplte = OperatorBase('lte', 2, logical, cname = '<=', sqlname = '<=', call = binary_op_behavior)
opneq = OperatorBase('neq', 2, logical, cname = '!=', sqlname = '!=', call = binary_op_behavior)
opeq = OperatorBase('eq', 2, logical, cname = '==', sqlname = '=', call = binary_op_behavior)
@ -355,19 +355,27 @@ fnpow = OperatorBase('pow', 2, lambda *_ : DoubleT, cname = 'pow', sqlname = 'PO
# type collections
def _op_make_dict(*items : OperatorBase):
return { i.name: i for i in items}
#binary op
builtin_binary_arith = _op_make_dict(opadd, opdiv, opmul, opsub, opmod)
builtin_binary_logical = _op_make_dict(opand, opor, opxor, opgt, oplt,
opge, oplte, opneq, opeq)
opgte, oplte, opneq, opeq)
builtin_binary_ops = {**builtin_binary_arith, **builtin_binary_logical}
#unary op
builtin_unary_logical = _op_make_dict(opnot)
builtin_unary_arith = _op_make_dict(opneg)
builtin_unary_special = _op_make_dict(spnull, opdistinct)
# functions
builtin_cstdlib = _op_make_dict(fnsqrt, fnlog, fnsin, fncos, fntan, fnpow)
builtin_func = _op_make_dict(fnmax, fnmin, fnsum, fnavg, fnmaxs,
fnmins, fndeltas, fnratios, fnlast,
fnfirst, fnsums, fnavgs, fncnt,
fnpack, fntrunc, fnprev, fnnext,
fnvar, fnvars, fnstd, fnstds)
builtin_aggfunc = _op_make_dict(fnmax, fnmin, fnsum, fnavg,
fnlast, fnfirst, fncnt, fnvar, fnstd)
builtin_vecfunc = _op_make_dict(fnmaxs,
fnmins, fndeltas, fnratios, fnsums, fnavgs,
fnpack, fntrunc, fnprev, fnnext, fnvars, fnstds)
builtin_vecfunc = {**builtin_vecfunc, **builtin_cstdlib}
builtin_func = {**builtin_vecfunc, **builtin_aggfunc}
user_module_func = {}
builtin_operators : Dict[str, OperatorBase] = {**builtin_binary_arith, **builtin_binary_logical,
**builtin_unary_arith, **builtin_unary_logical, **builtin_unary_special, **builtin_func, **builtin_cstdlib,
**user_module_func}

@ -157,4 +157,4 @@ def get_innermost(sl):
elif sl and type(sl) is list:
return get_innermost(sl[0])
else:
return sl
return sl

@ -5,6 +5,7 @@
#include "./server/gc.h"
__AQEXPORT__(void) __AQ_Init_GC__(Context* cxt) {
GC::gc_handle = static_cast<GC*>(cxt->gc);
GC::scratch_space = nullptr;
}
#else // __AQ_USE_THREADEDGC__

@ -0,0 +1,72 @@
#include "./server/libaquery.h"
#ifndef __AQ_USE_THREADEDGC__
#include "./server/gc.h"
__AQEXPORT__(void) __AQ_Init_GC__(Context* cxt) {
GC::gc_handle = static_cast<GC*>(cxt->gc);
}
#else // __AQ_USE_THREADEDGC__
#define __AQ_Init_GC__(x)
#endif // __AQ_USE_THREADEDGC__
#include "./server/hasher.h"
#include "./server/monetdb_conn.h"
#include "./server/aggregations.h"
__AQEXPORT__(int) dll_2Cxoox(Context* cxt) {
using namespace std;
using namespace types;
auto server = static_cast<Server*>(cxt->alt_server);
auto len_4ycjiV = server->cnt;
auto mont_8AE = ColRef<const char*>(len_4ycjiV, server->getCol(0));
auto sales_2RB = ColRef<int>(len_4ycjiV, server->getCol(1));
const char* names_6pIt[] = {"mont", "minw2ysales"};
auto out_2LuaMH = new TableInfo<const char*,vector_type<double>>("out_2LuaMH", names_6pIt);
decltype(auto) col_EeW23s = out_2LuaMH->get_col<0>();
decltype(auto) col_5gY1Dm = out_2LuaMH->get_col<1>();
typedef record<decays<decltype(mont_8AE)::value_t>> record_typegj3e8Xf;
ankerl::unordered_dense::map<record_typegj3e8Xf, uint32_t, transTypes<record_typegj3e8Xf, hasher>> gMzMTEvd;
gMzMTEvd.reserve(mont_8AE.size);
uint32_t* reversemap = new uint32_t[mont_8AE.size<<1],
*mapbase = reversemap + mont_8AE.size;
for (uint32_t i2E = 0; i2E < mont_8AE.size; ++i2E){
reversemap[i2E] = gMzMTEvd.hashtable_push(forward_as_tuple(mont_8AE[i2E]));
}
auto arr_values = gMzMTEvd.values().data();
auto arr_len = gMzMTEvd.size();
uint32_t* seconds = new uint32_t[gMzMTEvd.size()];
auto vecs = static_cast<vector_type<uint32_t>*>(malloc(sizeof(vector_type<uint32_t>) * arr_len));
vecs[0].init_from(arr_values[0].second, mapbase);
for (uint32_t i = 1; i < arr_len; ++i) {
vecs[i].init_from(arr_values[i].second, mapbase + arr_values[i - 1].second);
arr_values[i].second += arr_values[i - 1].second;
}
for (uint32_t i = 0; i < mont_8AE.size; ++i) {
auto id = reversemap[i];
mapbase[--arr_values[id].second] = i;
}
col_EeW23s.reserve(gMzMTEvd.size());
col_5gY1Dm.reserve(gMzMTEvd.size());
auto buf_col_5gY1Dm = new double[mont_8AE.size];
for (uint32_t i = 0; i < arr_len; ++i) {
col_5gY1Dm[i].init_from(vecs[i].size, buf_col_5gY1Dm + arr_values[i].second);
}
for (uint32_t i = 0; i < arr_len; ++i) {
auto &key_3iNX3qG = arr_values[i].first;
auto &val_7jjv8Mo = arr_values[i].second;
col_EeW23s.emplace_back(get<0>(key_3iNX3qG));
avgw(10, sales_2RB[vecs[i]], col_5gY1Dm[i]);
}
//print(*out_2LuaMH);
//FILE* fp_5LQeym = fopen("flatten.csv", "wb");
out_2LuaMH->printall(",", "\n", nullptr, nullptr, 10);
//fclose(fp_5LQeym);
puts("done.");
return 0;
}

@ -0,0 +1,51 @@
import struct
import readline
from typing import List
name : str = input('Filename (in path ./procedures/<filename>.aqp):')
def write():
s : str = input()
qs : List[str] = []
while(len(s) and not s.startswith('S')):
qs.append(s)
s = input()
ms : int = int(input())
with open(f'./procedures/{name}.aqp', 'wb') as fp:
fp.write(struct.pack("I", len(qs) + (ms > 0)))
fp.write(struct.pack("I", ms))
if (ms > 0):
fp.write(b'N\x00')
for q in qs:
fp.write(q.encode('utf-8'))
if q.startswith('Q'):
fp.write(b'\n ')
fp.write(b'\x00')
def read():
with open(f'./procedures/{name}.aqp', 'rb') as fp:
nq = struct.unpack("I", fp.read(4))[0]
ms = struct.unpack("I", fp.read(4))[0]
qs = fp.read().split(b'\x00')
print(f'Procedure {name}, {nq} queries, {ms} modules:')
for q in qs:
print(' ' + q.decode('utf-8'))
if __name__ == '__main__':
while True:
cmd = input("r for read, w for write: ")
if cmd.lower().startswith('r'):
read()
break
elif cmd.lower().startswith('w'):
write()
break
elif cmd.lower().startswith('q'):
break

@ -4,8 +4,8 @@ from enum import Enum, auto
from typing import Dict, List, Optional, Set, Tuple, Union
from engine.types import *
from engine.utils import (base62alp, base62uuid, enlist, get_innermost,
get_legal_name)
from engine.utils import (base62alp, base62uuid, enlist,
get_innermost, get_legal_name)
from reconstruct.storage import ColRef, Context, TableInfo
class ast_node:
@ -339,8 +339,8 @@ class projection(ast_node):
return ', '.join([self.pyname2cname[n.name] for n in lst_names])
else:
return self.pyname2cname[proj_name]
for key, val in proj_map.items():
gb_tovec = [False] * len(proj_map)
for i, (key, val) in enumerate(proj_map.items()):
if type(val[1]) is str:
x = True
y = get_proj_name
@ -357,22 +357,27 @@ class projection(ast_node):
out_typenames[key] = decltypestring
else:
out_typenames[key] = val[0].cname
if (type(val[2].udf_called) is udf and # should bulkret also be colref?
elemental_ret_udf = (
type(val[2].udf_called) is udf and # should bulkret also be colref?
val[2].udf_called.return_pattern == udf.ReturnPattern.elemental_return
or
self.group_node and
(self.group_node.use_sp_gb and
)
folding_vector_groups = (
self.group_node and
(
self.group_node.use_sp_gb and
val[2].cols_mentioned.intersection(
self.datasource.all_cols().difference(
self.datasource.get_joint_cols(self.group_node.refs)
))
) and val[2].is_compound # compound val not in key
# or
# val[2].is_compound > 1
# (not self.group_node and val[2].is_compound)
):
out_typenames[key] = f'vector_type<{out_typenames[key]}>'
self.out_table.columns[key].compound = True
)
)
) and
val[2].is_compound # compound val not in key
)
if (elemental_ret_udf or folding_vector_groups):
out_typenames[key] = f'vector_type<{out_typenames[key]}>'
self.out_table.columns[key].compound = True
if self.group_node is not None and self.group_node.use_sp_gb:
gb_tovec[i] = True
outtable_col_nameslist = ', '.join([f'"{c.name}"' for c in self.out_table.columns])
self.outtable_col_names = 'names_' + base62uuid(4)
self.context.emitc(f'const char* {self.outtable_col_names}[] = {{{outtable_col_nameslist}}};')
@ -384,12 +389,14 @@ class projection(ast_node):
gb_vartable : Dict[str, Union[str, int]] = deepcopy(self.pyname2cname)
gb_cexprs : List[str] = []
gb_colnames : List[str] = []
gb_types : List[Types] = []
for key, val in proj_map.items():
col_name = 'col_' + base62uuid(6)
self.context.emitc(f'decltype(auto) {col_name} = {self.out_table.contextname_cpp}->get_col<{key}>();')
gb_cexprs.append((col_name, val[2]))
gb_colnames.append(col_name)
self.group_node.finalize(gb_cexprs, gb_vartable, gb_colnames)
gb_types.append(val[0])
self.group_node.finalize(gb_cexprs, gb_vartable, gb_colnames, gb_types, gb_tovec)
else:
for i, (key, val) in enumerate(proj_map.items()):
if type(val[1]) is int:
@ -533,6 +540,7 @@ class groupby_c(ast_node):
def init(self, node : List[Tuple[expr, Set[ColRef]]]):
self.proj : projection = self.parent
self.glist : List[Tuple[expr, Set[ColRef]]] = node
self.vecs : str = 'vecs_' + base62uuid(3)
return super().init(node)
def produce(self, node : List[Tuple[expr, Set[ColRef]]]):
@ -561,21 +569,22 @@ class groupby_c(ast_node):
e = g_str
g_contents_list.append(e)
first_col = g_contents_list[0]
self.total_sz = 'len_' + base62uuid(4)
self.context.emitc(f'uint32_t {self.total_sz} = {first_col}.size;')
g_contents_decltype = [f'decays<decltype({c})::value_t>' for c in g_contents_list]
g_contents = ', '.join(
[f'{c}[{scanner_itname}]' for c in g_contents_list]
)
self.context.emitc(f'typedef record<{",".join(g_contents_decltype)}> {self.group_type};')
self.context.emitc(f'ankerl::unordered_dense::map<{self.group_type}, vector_type<uint32_t>, '
f'transTypes<{self.group_type}, hasher>> {self.group};')
self.context.emitc(f'{self.group}.reserve({first_col}.size);')
self.context.emitc(f'AQHashTable<{self.group_type}, '
f'transTypes<{self.group_type}, hasher>> {self.group} {{{self.total_sz}}};')
self.n_grps = len(self.glist)
self.scanner = scan(self, first_col + '.size', it_name=scanner_itname)
self.scanner.add(f'{self.group}[forward_as_tuple({g_contents})].emplace_back({self.scanner.it_var});')
self.scanner = scan(self, self.total_sz, it_name=scanner_itname)
self.scanner.add(f'{self.group}.hashtable_push(forward_as_tuple({g_contents}), {self.scanner.it_var});')
def consume(self, _):
self.scanner.finalize()
self.context.emitc(f'auto {self.vecs} = {self.group}.ht_postproc({self.total_sz});')
# def deal_with_assumptions(self, assumption:assumption, out:TableInfo):
# gscanner = scan(self, self.group)
# val_var = 'val_'+base62uuid(7)
@ -583,16 +592,42 @@ class groupby_c(ast_node):
# gscanner.add(f'{self.datasource.cxt_name}->order_by<{assumption.result()}>(&{val_var});')
# gscanner.finalize()
def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]], col_names : List[str]):
for c in col_names:
def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]],
col_names : List[str], col_types : List[Types], col_tovec : List[bool]):
tovec_columns = set()
for i, c in enumerate(col_names):
self.context.emitc(f'{c}.reserve({self.group}.size());')
gscanner = scan(self, self.group, loop_style = 'for_each')
if col_tovec[i]: # and type(col_types[i]) is VectorT:
typename : Types = col_types[i] # .inner_type
self.context.emitc(f'auto buf_{c} = static_cast<{typename.cname} *>(calloc({self.total_sz}, sizeof({typename.cname})));')
tovec_columns.add(c)
self.arr_len = 'arrlen_' + base62uuid(3)
self.arr_values = 'arrvals_' + base62uuid(3)
self.context.emitc(f'auto {self.arr_len} = {self.group}.size();')
self.context.emitc(f'auto {self.arr_values} = {self.group}.values();')
if len(tovec_columns):
preproc_scanner = scan(self, self.arr_len)
preproc_scanner_it = preproc_scanner.it_var
for c in tovec_columns:
preproc_scanner.add(f'{c}[{preproc_scanner_it}].init_from'
f'({self.vecs}[{preproc_scanner_it}].size,'
f' {"buf_" + c} + {self.group}.ht_base'
f'[{preproc_scanner_it}]);'
)
preproc_scanner.finalize()
self.context.emitc(f'GC::scratch_space = GC::gc_handle ? &(GC::gc_handle->scratch) : nullptr;')
# gscanner = scan(self, self.group, loop_style = 'for_each')
gscanner = scan(self, self.arr_len)
key_var = 'key_'+base62uuid(7)
val_var = 'val_'+base62uuid(7)
gscanner.add(f'auto &{key_var} = {gscanner.it_var}.first;', position = 'front')
gscanner.add(f'auto &{val_var} = {gscanner.it_var}.second;', position = 'front')
# gscanner.add(f'auto &{key_var} = {gscanner.it_var}.first;', position = 'front')
# gscanner.add(f'auto &{val_var} = {gscanner.it_var}.second;', position = 'front')
gscanner.add(f'auto &{key_var} = {self.arr_values}[{gscanner.it_var}];', position = 'front')
gscanner.add(f'auto &{val_var} = {self.vecs}[{gscanner.it_var}];', position = 'front')
len_var = None
def define_len_var():
nonlocal len_var
@ -627,7 +662,7 @@ class groupby_c(ast_node):
materialize_builtin = materialize_builtin,
count=lambda:f'{val_var}.size')
for ce in cexprs:
for i, ce in enumerate(cexprs):
ex = ce[1]
materialize_builtin = {}
if type(ex.udf_called) is udf:
@ -640,9 +675,18 @@ class groupby_c(ast_node):
materialize_builtin['_builtin_ret'] = f'{ce[0]}.back()'
gscanner.add(f'{ex.eval(c_code = True, y=get_var_names, materialize_builtin = materialize_builtin)};\n')
continue
gscanner.add(f'{ce[0]}.emplace_back({get_var_names_ex(ex)});\n')
if col_tovec[i]:
if ex.remake_binary(f'{ce[0]}[{gscanner.it_var}]'):
gscanner.add(f'{get_var_names_ex(ex)};\n')
else:
gscanner.add(f'{ce[0]}[{gscanner.it_var}] = {get_var_names_ex(ex)};\n')
else:
gscanner.add(f'{ce[0]}.emplace_back({get_var_names_ex(ex)});\n')
gscanner.add(f'GC::scratch_space->release();')
gscanner.finalize()
self.context.emitc(f'GC::scratch_space = nullptr;')
self.datasource.groupinfo = None
@ -718,10 +762,11 @@ class groupby(ast_node):
# self.parent.var_table.
self.parent.col_ext.update(l[1])
def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]], col_names : List[str]):
def finalize(self, cexprs : List[Tuple[str, expr]], var_table : Dict[str, Union[str, int]],
col_names : List[str], col_types : List[Types], col_tovec : List[bool]):
if self.use_sp_gb:
self.dedicated_gb = groupby_c(self.parent, self.dedicated_glist)
self.dedicated_gb.finalize(cexprs, var_table, col_names)
self.dedicated_gb.finalize(cexprs, var_table, col_names, col_types, col_tovec)
class join(ast_node):
@ -1300,7 +1345,7 @@ class outfile(ast_node):
filename = self.node['loc']['literal'] if 'loc' in self.node else self.node['literal']
sep = ',' if 'term' not in self.node else self.node['term']['literal']
file_pointer = 'fp_' + base62uuid(6)
self.addc(f'FILE* {file_pointer} = fopen("{filename}", "w");')
self.addc(f'FILE* {file_pointer} = fopen("{filename}", "wb");')
self.addc(f'{self.parent.out_table.contextname_cpp}->printall("{sep}", "\\n", nullptr, {file_pointer});')
self.addc(f'fclose({file_pointer});')
self.context.ccode += self.ccode

@ -367,6 +367,19 @@ class expr(ast_node):
self.curr_code += c.codegen(delegate)
return self.curr_code
def remake_binary(self, ret_expr):
if self.root:
self.oldsql = self.sql
if (self.opname in builtin_binary_ops):
patched_opname = 'aqop_' + self.opname
self.sql = (f'{patched_opname}({self.children[0].sql}, '
f'{self.children[1].sql}, {ret_expr})')
return True
elif self.opname in builtin_vecfunc:
self.sql = self.sql[:self.sql.rindex(')')]
self.sql += ', ' + ret_expr + ')'
return True
return False
def __str__(self):
return self.sql
def __repr__(self):

@ -1,5 +1,6 @@
#pragma once
#include "types.h"
#include "gc.h"
#include <utility>
#include <limits>
#include <deque>
@ -12,7 +13,7 @@ size_t count(const VT<T>& v) {
}
template <class T>
constexpr static inline size_t count(const T&) { return 1; }
constexpr static size_t count(const T&) { return 1; }
// TODO: Specializations for dt/str/none
template<class T, template<typename ...> class VT>
@ -29,14 +30,19 @@ double avg(const VT<T>& v) {
return (sum<T>(v) / static_cast<double>(v.size));
}
template<class T, template<typename ...> class VT, class Ret>
void sqrt(const VT<T>& v, Ret& ret) {
for (uint32_t i = 0; i < v.size; ++i)
ret[i] = sqrt(v[i]);
}
template<class T, template<typename ...> class VT>
VT<double> sqrt(const VT<T>& v) {
VT<double> ret(v.size);
for (uint32_t i = 0; i < v.size; ++i) {
ret[i] = sqrt(v[i]);
}
sqrt(v, ret);
return ret;
}
template <class T>
T truncate(const T& v, const uint32_t precision) {
auto multiplier = pow(10, precision);
@ -73,109 +79,153 @@ T min(const VT<T>& v) {
min_v = min_v < _v ? min_v : _v;
return min_v;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, T> mins(const VT<T>& arr) {
// simplify this using a template std::binary_function<T, T, bool> = std::less;
template<class T, template<typename ...> class VT, class Ret>
void mins(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
std::deque<std::pair<T, uint32_t>> cache;
decayed_t<VT, T> ret(len);
T min = std::numeric_limits<T>::max();
for (int i = 0; i < len; ++i) {
if (arr[i] < min)
min = arr[i];
ret[i] = min;
}
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, T> maxs(const VT<T>& arr) {
decayed_t<VT, T> mins(const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
mins(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void maxs(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, T> ret(len);
T max = std::numeric_limits<T>::min();
for (int i = 0; i < len; ++i) {
if (arr[i] > max)
max = arr[i];
ret[i] = max;
}
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, T> minw(uint32_t w, const VT<T>& arr) {
decayed_t<VT, T> maxs(const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
maxs(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void minw(uint32_t w, const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, T> ret(len);
std::deque<std::pair<T, uint32_t>> cache;
for (int i = 0; i < len; ++i) {
if (!cache.empty() && cache.front().second == i - w) cache.pop_front();
while (!cache.empty() && cache.back().first > arr[i]) cache.pop_back();
cache.push_back({ arr[i], i });
ret[i] = cache.front().first;
}
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, T> maxw(uint32_t w, const VT<T>& arr) {
decayed_t<VT, T> minw(uint32_t w, const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
minw(w, arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void maxw(uint32_t w, const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, T> ret(len);
std::deque<std::pair<T, uint32_t>> cache;
for (int i = 0; i < len; ++i) {
if (!cache.empty() && cache.front().second == i - w) cache.pop_front();
while (!cache.empty() && cache.back().first > arr[i]) cache.pop_back();
while (!cache.empty() && cache.back().first < arr[i]) cache.pop_back();
cache.push_back({ arr[i], i });
arr[i] = cache.front().first;
ret[i] = cache.front().first;
}
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetFPType<T>> ratiow(uint32_t w, const VT<T>& arr) {
inline decayed_t<VT, T> maxw(uint32_t w, const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
maxw(w, arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void ratiow(uint32_t w, const VT<T>& arr, Ret& ret) {
typedef std::decay_t<types::GetFPType<T>> FPType;
uint32_t len = arr.size;
if (arr.size <= w)
len = 1;
w = w > len ? len : w;
decayed_t<VT, FPType> ret(arr.size);
ret[0] = 0;
for (uint32_t i = 0; i < w; ++i)
ret[i] = arr[i] / (FPType)arr[0];
for (uint32_t i = w; i < arr.size; ++i)
ret[i] = arr[i] / (FPType) arr[i - w];
}
template<class T, template<typename ...> class VT>
inline decayed_t<VT, types::GetFPType<T>> ratiow(uint32_t w, const VT<T>& arr) {
typedef std::decay_t<types::GetFPType<T>> FPType;
decayed_t<VT, FPType> ret(arr.size);
ratiow(w, arr, ret);
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetFPType<T>> ratios(const VT<T>& arr) {
inline decayed_t<VT, types::GetFPType<T>> ratios(const VT<T>& arr) {
return ratiow(1, arr);
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetLongType<T>> sums(const VT<T>& arr) {
template<class T, template<typename ...> class VT, class Ret>
inline void ratios(const VT<T>& arr, Ret& ret) {
return ratiow(1, arr, ret);
}
template<class T, template<typename ...> class VT, class Ret>
void sums(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, types::GetLongType<T>> ret(len);
uint32_t i = 0;
if (len) ret[i++] = arr[0];
for (; i < len; ++i)
ret[i] = ret[i - 1] + arr[i];
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetFPType<types::GetLongType<T>>> avgs(const VT<T>& arr) {
inline decayed_t<VT, types::GetLongType<T>> sums(const VT<T>& arr) {
decayed_t<VT, types::GetLongType<T>> ret(arr.size);
sums(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void avgs(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
typedef types::GetFPType<types::GetLongType<T>> FPType;
decayed_t<VT, FPType> ret(len);
uint32_t i = 0;
types::GetLongType<T> s;
if (len) s = ret[i++] = arr[0];
for (; i < len; ++i)
ret[i] = (s += arr[i]) / (FPType)(i + 1);
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetLongType<T>> sumw(uint32_t w, const VT<T>& arr) {
inline decayed_t<VT, types::GetFPType<types::GetLongType<T>>> avgs(const VT<T>& arr) {
typedef types::GetFPType<types::GetLongType<T>> FPType;
decayed_t<VT, FPType> ret(arr.size);
avgs(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void sumw(uint32_t w, const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, types::GetLongType<T>> ret(len);
uint32_t i = 0;
w = w > len ? len : w;
if (len) ret[i++] = arr[0];
@ -183,11 +233,17 @@ decayed_t<VT, types::GetLongType<T>> sumw(uint32_t w, const VT<T>& arr) {
ret[i] = ret[i - 1] + arr[i];
for (; i < len; ++i)
ret[i] = ret[i - 1] + arr[i] - arr[i - w];
return ret;
}
template<class T, template<typename ...> class VT>
void avgw(uint32_t w, const VT<T>& arr, decayed_t<vector_type, types::GetFPType<types::GetLongType<T>>>& ret) {
decayed_t<VT, types::GetLongType<T>> sumw(uint32_t w, const VT<T>& arr) {
decayed_t<VT, types::GetLongType<T>> ret(arr.size);
sumw(w, arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void avgw(uint32_t w, const VT<T>& arr, Ret& ret) {
typedef types::GetFPType<types::GetLongType<T>> FPType;
const uint32_t& len = arr.size;
uint32_t i = 0;
@ -201,26 +257,19 @@ void avgw(uint32_t w, const VT<T>& arr, decayed_t<vector_type, types::GetFPType<
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetFPType<types::GetLongType<T>>> avgw(uint32_t w, const VT<T>& arr) {
inline decayed_t<VT, types::GetFPType<types::GetLongType<T>>> avgw(uint32_t w, const VT<T>& arr) {
typedef types::GetFPType<types::GetLongType<T>> FPType;
const uint32_t& len = arr.size;
decayed_t<VT, FPType> ret(len);
uint32_t i = 0;
types::GetLongType<T> s{};
w = w > len ? len : w;
if (len) s = ret[i++] = arr[0];
for (; i < w; ++i)
ret[i] = (s += arr[i]) / (FPType)(i + 1);
for (; i < len; ++i)
ret[i] = ret[i - 1] + (arr[i] - arr[i - w]) / (FPType)w;
avgw(w, arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, bool sd = false>
decayed_t<VT, types::GetFPType<types::GetLongType<T>>> varw(uint32_t w, const VT<T>& arr) {
template<class T, template<typename ...> class VT, class Ret, bool sd = false>
void varw(uint32_t w, const VT<T>& arr,
Ret& ret) {
using FPType = types::GetFPType<types::GetLongType<T>>;
const uint32_t& len = arr.size;
decayed_t<VT, FPType> ret(len);
uint32_t i = 0;
types::GetLongType<T> s{};
w = w > len ? len : w;
@ -252,7 +301,14 @@ decayed_t<VT, types::GetFPType<types::GetLongType<T>>> varw(uint32_t w, const VT
if constexpr(sd)
if(i)
ret[i-1] = sqrt(ret[i-1]);
}
template<class T, template<typename ...> class VT, bool sd = false>
inline decayed_t<VT, types::GetFPType<types::GetLongType<T>>> varw(uint32_t w, const VT<T>& arr) {
using FPType = types::GetFPType<types::GetLongType<T>>;
decayed_t<VT, FPType> ret(arr.size);
varw<T, VT, decayed_t<VT, types::GetFPType<types::GetLongType<T>>>, sd>(w, arr, ret);
return ret;
}
@ -274,11 +330,10 @@ types::GetFPType<types::GetLongType<decays<T>>> var(const VT<T>& arr) {
return (ssq - s * s / (FPType)(len + 1)) / (FPType)(len + 1);
}
template<class T, template<typename ...> class VT, bool sd = false>
decayed_t<VT, types::GetFPType<types::GetLongType<T>>> vars(const VT<T>& arr) {
template<class T, template<typename ...> class VT, class Ret, bool sd = false>
void vars(const VT<T>& arr, Ret& ret) {
typedef types::GetFPType<types::GetLongType<T>> FPType;
const uint32_t& len = arr.size;
decayed_t<VT, FPType> ret(len);
uint32_t i = 0;
types::GetLongType<T> s{};
FPType MnX{};
@ -298,70 +353,103 @@ decayed_t<VT, types::GetFPType<types::GetLongType<T>>> vars(const VT<T>& arr) {
ret[i] = MnX / (FPType)(i + 1);
if constexpr(sd) ret[i] = sqrt(ret[i]);
}
}
template<class T, template<typename ...> class VT, bool sd = false>
inline decayed_t<VT, types::GetFPType<types::GetLongType<T>>> vars(const VT<T>& arr) {
typedef types::GetFPType<types::GetLongType<T>> FPType;
decayed_t<VT, FPType> ret(arr.size);
vars<T, VT, decayed_t<VT, types::GetFPType<types::GetLongType<T>>>, sd>(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT>
types::GetFPType<types::GetLongType<decays<T>>> stddev(const VT<T>& arr) {
inline types::GetFPType<types::GetLongType<decays<T>>> stddev(const VT<T>& arr) {
return sqrt(var(arr));
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetFPType<types::GetLongType<T>>> stddevs(const VT<T>& arr) {
inline decayed_t<VT, types::GetFPType<types::GetLongType<T>>> stddevs(const VT<T>& arr) {
return vars<T, VT, true>(arr);
}
template<class T, template<typename ...> class VT>
decayed_t<VT, types::GetFPType<types::GetLongType<T>>> stddevw(uint32_t w, const VT<T>& arr) {
inline decayed_t<VT, types::GetFPType<types::GetLongType<T>>> stddevw(uint32_t w, const VT<T>& arr) {
return varw<T, VT, true>(w, arr);
}
template<class T, template<typename ...> class VT, class Ret>
inline auto stddevs(const VT<T>& arr, Ret& ret) {
return vars<T, VT, Ret, true>(arr, ret);
}
template<class T, template<typename ...> class VT, class Ret>
inline auto stddevw(uint32_t w, const VT<T>& arr, Ret& ret) {
return varw<T, VT, Ret, true>(w, arr, ret);
}
// use getSignedType
template<class T, template<typename ...> class VT>
decayed_t<VT, T> deltas(const VT<T>& arr) {
template<class T, template<typename ...> class VT, class Ret>
void deltas(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, T> ret(len);
uint32_t i = 0;
if (len) ret[i++] = 0;
for (; i < len; ++i)
ret[i] = arr[i] - arr[i - 1];
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, T> prev(const VT<T>& arr) {
inline decayed_t<VT, T> deltas(const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
deltas(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void prev(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, T> ret(len);
uint32_t i = 0;
if (len) ret[i++] = arr[0];
for (; i < len; ++i)
ret[i] = arr[i - 1];
return ret;
}
template<class T, template<typename ...> class VT>
decayed_t<VT, T> aggnext(const VT<T>& arr) {
inline decayed_t<VT, T> prev(const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
prev(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT, class Ret>
void aggnext(const VT<T>& arr, Ret& ret) {
const uint32_t& len = arr.size;
decayed_t<VT, T> ret(len);
uint32_t i = 1;
for (; i < len; ++i)
ret[i - 1] = arr[i];
if (len > 0) ret[len - 1] = arr[len - 1];
}
template<class T, template<typename ...> class VT>
inline decayed_t<VT, T> aggnext(const VT<T>& arr) {
decayed_t<VT, T> ret(arr.size);
aggnext(arr, ret);
return ret;
}
template<class T, template<typename ...> class VT>
T last(const VT<T>& arr) {
if (!arr.size) return 0;
const uint32_t& len = arr.size;
return arr[arr.size - 1];
}
template<class T, template<typename ...> class VT>
T first(const VT<T>& arr) {
if (!arr.size) return 0;
const uint32_t& len = arr.size;
return arr[0];
}
#define __DEFAULT_AGGREGATE_FUNCTION__(NAME, RET) \
template <class T> constexpr T NAME(const T& v) { return RET; }

@ -1,9 +1,43 @@
#pragma once
#ifndef __AQ_USE_THREADEDGC__
#include <atomic>
class GC {
private:;
class ScratchSpace {
public:
void* ret;
char* scratchspace;
size_t ptr;
size_t cnt;
size_t capacity;
size_t initial_capacity;
void* temp_memory_fractions;
//uint8_t status;
// record maximum size
constexpr static uint8_t Grow = 0x1;
// no worry about overflow
constexpr static uint8_t Use = 0x0;
void init(size_t initial_capacity);
// apply for memory
void* alloc(uint32_t sz);
void register_ret(void* ret);
// reorganize memory space
void release();
// reset status of the scratch space
void reset();
// reset scratch space to initial capacity.
void cleanup();
};
#ifndef __AQ_USE_THREADEDGC__
class GC {
private:
size_t max_slots,
interval, forced_clean,
forceclean_timer = 0;
@ -18,7 +52,6 @@ private:;
std::atomic<uint64_t> current_size;
volatile bool lock;
using gc_deallocator_t = void (*)(void*);
// maybe use volatile std::thread::id instead
protected:
void acquire_lock();
@ -29,28 +62,38 @@ protected:
void terminate_daemon();
public:
void reg(void* v, uint32_t sz = 1,
ScratchSpace scratch;
void reg(void* v, uint32_t sz = 0xffffffff,
void(*f)(void*) = free
);
uint32_t get_threshold() const {
return threshould;
}
GC(
uint64_t max_size = 0xfffffff, uint32_t max_slots = 4096,
uint32_t interval = 10000, uint32_t forced_clean = 1000000,
uint32_t threshould = 64 //one seconds
uint32_t threshould = 64, //one seconds
uint32_t scratch_sz = 0x1000000 // 16 MB
) : max_size(max_size), max_slots(max_slots),
interval(interval), forced_clean(forced_clean),
threshould(threshould) {
start_deamon();
GC::gc_handle = this;
this->scratch.init(1);
} // 256 MB
~GC(){
terminate_daemon();
scratch.cleanup();
}
static GC* gc_handle;
static ScratchSpace *scratch_space;
template <class T>
constexpr static inline gc_deallocator_t _delete(T*){
static inline gc_deallocator_t _delete(T*) {
return [](void* v){
delete (T*)v;
};

@ -132,7 +132,3 @@ namespace ankerl::unordered_dense{
struct hash<std::tuple<Types...>> : public hasher<Types...>{ };
}
struct aq_hashtable_value_t {
uint32_t id;
uint32_t cnt;
};

@ -4,6 +4,7 @@
#include <string>
#include <limits>
#include <cstring>
#include <string_view>
template <class ...Types>
std::string generate_printf_string(const char* sep = " ", const char* end = "\n") {
std::string str;
@ -25,6 +26,11 @@ inline decltype(auto) print_hook<bool>(const bool& v) {
return v? "true" : "false";
}
template<>
inline decltype(auto) print_hook<std::string_view>(const std::string_view& v) {
return v.data();
}
extern char* gbuf;
void setgbuf(char* buf = 0);

@ -55,6 +55,7 @@ void print<bool>(const bool&v, const char* delimiter){
std::cout<< (v?"true":"false") << delimiter;
}
template<class T>
T getInt(const char*& buf){
T ret = 0;
@ -451,6 +452,9 @@ void GC::reg(void* v, uint32_t sz, void(*f)(void*)) { //~ 40ns expected v. free
f(v);
return;
}
else if (sz == 0xffffffff)
sz = this->threshould;
auto _q = static_cast<memoryqueue_t>(q);
while(lock);
++alive_cnt;
@ -464,6 +468,72 @@ void GC::reg(void* v, uint32_t sz, void(*f)(void*)) { //~ 40ns expected v. free
#endif
inline GC* GC::gc_handle = nullptr;
inline ScratchSpace* GC::scratch_space = nullptr;
void ScratchSpace::init(size_t initial_capacity) {
ret = nullptr;
scratchspace = static_cast<char*>(malloc(initial_capacity));
ptr = cnt = 0;
capacity = initial_capacity;
this->initial_capacity = initial_capacity;
temp_memory_fractions = new vector_type<void*>();
}
inline void* ScratchSpace::alloc(uint32_t sz){
ptr = this->cnt;
this->cnt += sz; // major cost
if (this->cnt > capacity) {
[[unlikely]]
capacity = this->cnt + (capacity >> 1);
auto vec_tmpmem_fractions = static_cast<vector_type<char *>*>(temp_memory_fractions);
vec_tmpmem_fractions->emplace_back(scratchspace);
scratchspace = static_cast<char*>(malloc(capacity));
ptr = 0;
}
return scratchspace + ptr;
}
inline void ScratchSpace::register_ret(void* ret){
this->ret = ret;
}
inline void ScratchSpace::release(){
ptr = cnt = 0;
auto vec_tmpmem_fractions =
static_cast<vector_type<void*>*>(temp_memory_fractions);
if (vec_tmpmem_fractions->size) {
[[unlikely]]
for(auto& mem : *vec_tmpmem_fractions){
//free(mem);
GC::gc_handle->reg(mem);
}
vec_tmpmem_fractions->clear();
}
}
inline void ScratchSpace::reset() {
this->release();
ret = nullptr;
if (capacity != initial_capacity){
capacity = initial_capacity;
scratchspace = static_cast<char*>(realloc(scratchspace, capacity));
}
}
void ScratchSpace::cleanup(){
auto vec_tmpmem_fractions =
static_cast<vector_type<void*>*>(temp_memory_fractions);
if (vec_tmpmem_fractions->size) {
for(auto& mem : *vec_tmpmem_fractions){
free(mem);
//GC::gc_handle->reg(mem);
}
vec_tmpmem_fractions->clear();
}
delete vec_tmpmem_fractions;
free(this->scratchspace);
}
#include "dragonbox/dragonbox_to_chars.hpp"
@ -537,4 +607,11 @@ aq_to_chars<types::timestamp_t>(void* value, char* buffer) {
return buffer;
}
template<>
char*
aq_to_chars<std::string_view>(void* value, char* buffer){
const auto& src = *static_cast<std::string_view*>(value);
memcpy(buffer, src.data(), src.size());
return buffer + src.size();
}

@ -161,6 +161,7 @@ template<> char* aq_to_chars<char*>(void* , char*);
template<> char* aq_to_chars<types::date_t>(void* , char*);
template<> char* aq_to_chars<types::time_t>(void* , char*);
template<> char* aq_to_chars<types::timestamp_t>(void* , char*);
template<> char* aq_to_chars<std::string_view>(void* , char*);
typedef int (*code_snippet)(void*);
template <class _This_Struct>

@ -6,6 +6,8 @@
#include "monetdb_conn.h"
#include "monetdbe.h"
#include "table.h"
#include <thread>
#undef ERROR
#undef static_assert
@ -73,11 +75,11 @@ void Server::connect(Context *cxt){
printf("Error: Server %p already connected. Restart? (Y/n). \n", server);
char c[50];
std::cin.getline(c, 49);
for(int i = 0; i < 50; ++i){
for(int i = 0; i < 50; ++i) {
if (!c[i] || c[i] == 'y' || c[i] == 'Y'){
monetdbe_close(*server);
free(*server);
this->server = 0;
this->server = nullptr;
break;
}
else if(c[i]&&!(c[i] == ' ' || c[i] == '\t'))
@ -86,7 +88,10 @@ void Server::connect(Context *cxt){
}
server = (monetdbe_database*)malloc(sizeof(monetdbe_database));
auto ret = monetdbe_open(server, nullptr, nullptr);
monetdbe_options ops;
AQ_ZeroMemory(ops);
ops.nr_threads = std::thread::hardware_concurrency();
auto ret = monetdbe_open(server, nullptr, &ops);
if (ret == 0){
status = true;
this->server = server;
@ -148,8 +153,7 @@ void Server::print_results(const char* sep, const char* end){
szs [i] = monetdbe_type_szs[cols[i]->type];
header_string = header_string + cols[i]->name + sep + '|' + sep;
}
const size_t l_sep = strlen(sep) + 1;
if (header_string.size() - l_sep >= 0)
if (const size_t l_sep = strlen(sep) + 1; header_string.size() >= l_sep)
header_string.resize(header_string.size() - l_sep);
header_string += end + std::string(header_string.size(), '=') + end;
fputs(header_string.c_str(), stdout);

@ -191,6 +191,21 @@ constexpr prt_fn_t monetdbe_prtfns[] = {
aq_to_chars<std::nullptr_t>
};
#ifndef __AQ_USE_THREADEDGC__
void aq_init_gc(void *handle, Context* cxt)
{
typedef void (*aq_gc_init_t) (Context*);
if (handle && cxt){
auto sym = dlsym(handle, "__AQ_Init_GC__");
if(sym){
((aq_gc_init_t)sym)(cxt);
}
}
}
#else //__AQ_USE_THREADEDGC__
#define aq_init_gc(h, c)
#endif //__AQ_USE_THREADEDGC__
#include "monetdbe.h"
#undef max
#undef min
@ -280,6 +295,7 @@ void initialize_module(const char* module_name, void* module_handle, Context* cx
printf("Warning: module %s have no session support.\n", module_name);
}
}
#pragma endregion
int dll_main(int argc, char** argv, Context* cxt){
aq_timer timer;
@ -363,12 +379,7 @@ start:
recorded_queries.emplace_back(copy_lpstr("N"));
}
handle = dlopen(proc_name, RTLD_NOW);
#ifndef __AQ_USE_THREADEDGC__
{
typedef void (*aq_gc_init_t) (Context*);
((aq_gc_init_t)dlsym(handle, "__AQ_Init_GC__"))(cxt);
}
#endif
aq_init_gc(handle, cxt);
if (procedure_recording) {
recorded_libraries.emplace_back(handle);
}
@ -474,11 +485,13 @@ start:
p.__rt_loaded_modules = static_cast<void**>(
malloc(sizeof(void*) * p.postproc_modules));
for(uint32_t j = 0; j < p.postproc_modules; ++j){
auto pj = dlopen(p.name, RTLD_NOW);
auto pj = dlopen((procedure_root + p.name + std::to_string(j) + ".so").c_str(), RTLD_NOW);
if (pj == nullptr){
printf("Error: failed to load module %s\n", p.name);
return true;
}
aq_init_gc(pj, cxt);
p.__rt_loaded_modules[j] = pj;
}
}
@ -503,6 +516,7 @@ start:
};
const auto& load_proc_fromfile = [&](StoredProcedure& p) {
auto config_name = procedure_root + p.name + ".aqp";
puts(p.name);
auto fp = fopen(config_name.c_str(), "rb");
if(fp == nullptr){
puts("ERROR: Procedure not found on disk.");
@ -517,14 +531,17 @@ start:
p.queries = static_cast<char**>(malloc(sizeof(char*) * p.cnt));
p.queries[0] = static_cast<char*>(malloc(sizeof(char) * queries_size));
fread(&p.queries[0], queries_size, 1, fp);
fread(p.queries[0], 1, queries_size, fp);
for(uint32_t j = 1; j < p.cnt; ++j){
p.queries[j] = p.queries[j-1];
while(*p.queries[j] != '\0')
while(*(p.queries[j]) != '\0')
++p.queries[j];
++p.queries[j];
puts(p.queries[j-1]);
}
fclose(fp);
p.__rt_loaded_modules = 0;
return load_modules(p);
};
switch(n_recvd[i][1]){
@ -553,18 +570,22 @@ start:
auto _proc = cxt->stored_proc.find(proc_name);
if (_proc == cxt->stored_proc.end()){
printf("Procedure %s not found. Trying load from disk.\n", proc_name);
if (load_proc_fromfile(current_procedure)){
current_procedure.name = copy_lpstr(proc_name);
if (!load_proc_fromfile(current_procedure)){
cxt->stored_proc.insert_or_assign(proc_name, current_procedure);
}
else {
continue;
}
}
else{
current_procedure = _proc->second;
n_recv = current_procedure.cnt;
n_recvd = current_procedure.queries;
load_modules(current_procedure);
procedure_replaying = true;
goto start; // yes, I know, refactor later!!
}
n_recv = current_procedure.cnt;
n_recvd = current_procedure.queries;
load_modules(current_procedure);
procedure_replaying = true;
goto start; // yes, I know, refactor later!!
}
break;
case 'D': // delete procedure
@ -572,6 +593,9 @@ start:
case 'S': //save procedure
break;
case 'L': //load procedure
if (!load_proc_fromfile(current_procedure)) {
cxt->stored_proc.insert_or_assign(proc_name, current_procedure);
}
break;
case 'd': // display all procedures
for(const auto& p : cxt->stored_proc){

@ -10,6 +10,7 @@
#include <algorithm>
#include <cstdarg>
#include <vector>
#include <string_view>
#include "io.h"
#include "hasher.h"
@ -35,7 +36,8 @@ struct ColRef_cstorage {
int ty; // what if enum is not int?
};
template <template <class...> class VT, class T>
template <template <class...> class VT, class T,
std::enable_if_t<std::is_base_of_v<vector_base<T>, VT<T>>>* = nullptr>
std::ostream& operator<<(std::ostream& os, const VT<T>& v)
{
v.out();
@ -142,7 +144,7 @@ public:
vector_type<_Ty>::operator=(vt);
return *this;
}
ColRef<_Ty>& operator =(ColRef<_Ty>&& vt) {
ColRef<_Ty>& operator =(ColRef<_Ty>&& vt) noexcept {
vector_type<_Ty>::operator=(std::move(vt));
return *this;
@ -289,6 +291,7 @@ public:
uint32_t len = end - start;
return ColView<_Ty>(orig, idxs.subvec(start, end));
}
ColRef<_Ty> subvec_deep(uint32_t start, uint32_t end) const {
uint32_t len = end - start;
ColRef<_Ty> subvec(len);
@ -329,7 +332,7 @@ template<class ...Types> struct TableInfo;
template<class ...Types> struct TableView;
template <long long _Index, bool order = true, class... _Types>
constexpr inline auto& get(const TableInfo<_Types...>& table) noexcept {
constexpr auto& get(const TableInfo<_Types...>& table) noexcept {
if constexpr (order)
return *(ColRef<std::tuple_element_t<_Index, std::tuple<_Types...>>> *) & (table.colrefs[_Index]);
else
@ -337,7 +340,7 @@ constexpr inline auto& get(const TableInfo<_Types...>& table) noexcept {
}
template <long long _Index, class... _Types>
constexpr inline ColRef<std::tuple_element_t<_Index, std::tuple<_Types...>>>& get(const TableView<_Types...>& table) noexcept {
constexpr ColRef<std::tuple_element_t<_Index, std::tuple<_Types...>>>& get(const TableView<_Types...>& table) noexcept {
return *(ColRef<std::tuple_element_t<_Index, std::tuple<_Types...>>> *) & (table.info.colrefs[_Index]);
}
@ -348,9 +351,6 @@ struct is_vector_impl<ColView<V>> : std::true_type {};
template <class V>
struct is_vector_impl<vector_type<V>> : std::true_type {};
template<class ...Types>
struct TableView;
template<class ...Types>
struct TableInfo {
const char* name;
@ -459,8 +459,7 @@ struct TableInfo {
std::string header_string = std::string();
for (uint32_t i = 0; i < sizeof...(Types); ++i)
header_string += std::string(this->colrefs[i].name) + sep + '|' + sep;
const size_t l_sep = strlen(sep) + 1;
if (header_string.size() - l_sep >= 0)
if (const size_t l_sep = strlen(sep) + 1; header_string.size() >= l_sep)
header_string.resize(header_string.size() - l_sep);
header_string += end + std::string(header_string.size(), '=') + end;
return header_string;
@ -487,6 +486,7 @@ struct TableInfo {
if (header_string.size() - l_sep >= 0)
header_string.resize(header_string.size() - l_sep);
}
const auto& prt_loop = [&fp, &view, &printf_string, *this, &limit](const auto& f) {
#ifdef __AQ__HAS__INT128__
constexpr auto num_hge = count_type<__int128_t, __uint128_t>((tuple_type*)(0));
@ -531,7 +531,7 @@ struct TableInfo {
}
}
template <int ...vals> struct applier {
inline constexpr static void apply(const TableInfo<Types...>& t, const char* __restrict sep = ",", const char* __restrict end = "\n",
constexpr static void apply(const TableInfo<Types...>& t, const char* __restrict sep = ",", const char* __restrict end = "\n",
const vector_type<uint32_t>* __restrict view = nullptr, FILE* __restrict fp = nullptr, uint32_t limit = std::numeric_limits<uint32_t>::max()
)
{
@ -656,11 +656,11 @@ struct TableView {
};
template <class T>
constexpr static inline bool is_vector(const ColRef<T>&) {
constexpr static bool is_vector(const ColRef<T>&) {
return true;
}
template <class T>
constexpr static inline bool is_vector(const vector_type<T>&) {
constexpr static bool is_vector(const vector_type<T>&) {
return true;
}
@ -910,6 +910,42 @@ VT<bool> operator >(const T2& lhs, const VT<T1>& rhs) {
return ret;
}
#define _AQ_OP_(x) __AQ_OP__##x
#define __AQ_OP__add +
#define __AQ_OP__minus -
#define __AQ_OP__div /
#define __AQ_OP__mul *
#define __AQ_OP__and &
#define __AQ_OP__or |
#define __AQ_OP__xor ^
#define __AQ_OP__gt >
#define __AQ_OP__lt <
#define __AQ_OP__gte >=
#define __AQ_OP__lte <=
#define __AQ_OP__eq ==
#define __AQ_OP__neq !=
#define __D_AQOP(x) \
template <class T1, class T2, template<typename> class VT, class Ret>\
void aqop_##x (const VT<T1>& lhs, const VT<T2>& rhs, Ret& ret){\
for (uint32_t i = 0; i < ret.size; ++i)\
ret[i] = lhs[i] _AQ_OP_(x) rhs[i];\
}
__D_AQOP(add)
__D_AQOP(minus)
__D_AQOP(div)
__D_AQOP(mul)
__D_AQOP(and)
__D_AQOP(or)
__D_AQOP(xor)
__D_AQOP(gt)
__D_AQOP(lt)
__D_AQOP(gte)
__D_AQOP(lte)
__D_AQOP(eq)
__D_AQOP(neq)
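// A minimal usage sketch (assuming this header is included and the runtime has set up
// the GC before any vector_type is constructed). __D_AQOP(add) above expands to:
//   template <class T1, class T2, template<typename> class VT, class Ret>
//   void aqop_add(const VT<T1>& lhs, const VT<T2>& rhs, Ret& ret)
//   { for (uint32_t i = 0; i < ret.size; ++i) ret[i] = lhs[i] + rhs[i]; }
// aq_example_add_columns is a hypothetical call site: element-wise sum of two columns
// into an output column the caller has already sized to match.
static inline void aq_example_add_columns(const vector_type<int>& a,
                                          const vector_type<int>& b,
                                          vector_type<int>& out) {
    aqop_add(a, b, out);  // writes a[i] + b[i] for every i in [0, out.size)
}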
template <class ...Types>
void print(const TableInfo<Types...>& v, const char* delimiter = " ", const char* endline = "\n") {
@ -919,6 +955,7 @@ template <class ...Types>
void print(const TableView<Types...>& v, const char* delimiter = " ", const char* endline = "\n") {
v.print(delimiter, endline);
}
template <class T>
void print(const T& v, const char* delimiter = " ") {
std::cout << v << delimiter;
@ -933,7 +970,6 @@ void print<__uint128_t>(const __uint128_t& v, const char* delimiter);
#endif
template <>
void print<bool>(const bool& v, const char* delimiter);
template <class T>
void inline print_impl(const T& v, const char* delimiter, const char* endline) {
for (const auto& vi : v) {

@ -13,7 +13,7 @@ long long testing_throughput(uint32_t n_jobs, bool prompt = true){
auto tp = ThreadPool(thread::hardware_concurrency());
getchar();
auto i = 0u;
fp = fopen("tmp.tmp", "w");
fp = fopen("tmp.tmp", "wb");
auto time = chrono::high_resolution_clock::now();
while(i++ < n_jobs) tp.enqueue_task({ [](void* f) {fprintf(fp, "%d ", *(int*)f); free(f); }, new int(i) });
puts("done dispatching.");
@ -53,7 +53,7 @@ long long testing_transaction(uint32_t n_burst, uint32_t n_batch,
}
long long testing_destruction(bool prompt = true){
fp = fopen("tmp.tmp", "w");
fp = fopen("tmp.tmp", "wb");
if (prompt) {
puts("Press any key to start.");
getchar();

@ -3,6 +3,9 @@
#include <cstdint>
#include <type_traits>
#include <tuple>
#include <string_view>
#include <string>
#include <utility>
using std::size_t;
#if defined(__SIZEOF_INT128__) and not defined(_WIN32)
@ -13,6 +16,9 @@ using std::size_t;
#define __restrict__ __restrict
#endif
template<class T>
struct vector_base {};
template <class T>
constexpr static inline bool is_vector(const T&) {
return false;
@ -32,23 +38,23 @@ struct aqis_same_impl {
std::conditional_t<
std::is_same_v<T1, bool> || std::is_same_v<T2, bool>,
std::bool_constant<std::is_same_v<T1, bool> && std::is_same_v<T2, bool>>,
Cond(
(std::is_same_v<T1, bool> && std::is_same_v<T2, bool>),
std::true_type,
std::false_type
),
Cond(
std::is_signed_v<T1> == std::is_signed_v<T2>,
Cond(
std::is_floating_point_v<T1> == std::is_floating_point_v<T2>,
Cond(
aq_szof<T1> == aq_szof<T2>, // deal with sizeof(void)
std::true_type,
std::false_type
),
std::false_type
),
std::false_type
!(std::is_class_v<T1> || std::is_class_v<T2>),
Cond(
std::is_signed_v<T1> == std::is_signed_v<T2>,
Cond(
std::is_floating_point_v<T1> == std::is_floating_point_v<T2>,
std::bool_constant<aq_szof<T1> == aq_szof<T2>>, // deal with sizeof(void)
std::false_type
),
std::false_type
),
Cond(
(std::is_class_v<T1> && std::is_class_v<T2>),
std::bool_constant<(std::is_base_of_v<T1, T2> || std::is_base_of_v<T2, T1>)>,
std::false_type
)
)
>::value;
};
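// A minimal sketch of what this relaxed "same type" check accepts (illustration only;
// AqisBase/AqisDerived are hypothetical types, and aqis_same below is just this value).
struct AqisBase {}; struct AqisDerived : AqisBase {};
static_assert(aqis_same_impl<long long, int64_t>::value, "same signedness, integral, same width");
static_assert(!aqis_same_impl<int, unsigned int>::value, "signedness differs");
static_assert(!aqis_same_impl<int, float>::value, "floating-point-ness differs");
static_assert(!aqis_same_impl<bool, char>::value, "bool only matches bool");
static_assert(aqis_same_impl<AqisBase, AqisDerived>::value, "classes match when one derives from the other");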
@ -63,12 +69,12 @@ constexpr bool aqis_same<T1, T2> = aqis_same_impl<T1, T2>::value;
namespace types {
enum Type_t {
AINT32, AFLOAT, ASTR, ADOUBLE, ALDOUBLE, AINT64, AINT128, AINT16, ADATE, ATIME, AINT8,
AUINT32, AUINT64, AUINT128, AUINT16, AUINT8, ABOOL, VECTOR, ATIMESTAMP, ACHAR, NONE, ERROR
AUINT32, AUINT64, AUINT128, AUINT16, AUINT8, ABOOL, VECTOR, ATIMESTAMP, ACHAR, ASV, NONE, ERROR
};
static constexpr const char* printf_str[] = { "%d", "%f", "%s", "%lf", "%Lf", "%ld", "%d", "%hi", "%s", "%s", "%hhd",
"%u", "%lu", "%s", "%hu", "%hhu", "%s", "%s", "Vector<%s>", "%s", "%c", "NULL", "ERROR" };
static constexpr const char* printf_str[] = { "%d", "%f", "%s", "%lf", "%Lf", "%ld", "%s", "%hi", "%s", "%s", "%hhd",
"%u", "%lu", "%s", "%hu", "%hhu", "%s", "Vector<%s>", "%s", "%c", "%s", "NULL", "ERROR" };
static constexpr const char* SQL_Type[] = { "INT", "REAL", "TEXT", "DOUBLE", "DOUBLE", "BIGINT", "HUGEINT", "SMALLINT", "DATE", "TIME", "TINYINT",
"INT", "BIGINT", "HUGEINT", "SMALLINT", "TINYINT", "BOOL", "HUGEINT", "TIMESTAMP", "CHAR", "NULL", "ERROR"};
"INT", "BIGINT", "HUGEINT", "SMALLINT", "TINYINT", "BOOL", "HUGEINT", "TIMESTAMP", "CHAR", "TEXT", "NULL", "ERROR"};
// TODO: deal with data/time <=> str/uint conversion
@ -169,6 +175,8 @@ namespace types {
f(unsigned short, AUINT16) \
f(bool, ABOOL) \
f(timestamp_t, ATIMESTAMP) \
f(std::string_view, ASV) \
f(std::string, ASV) \
F_INT128(f)
inline constexpr static Type_t getType() {
@ -399,7 +407,6 @@ struct transValues_s<lT<vT, T...>, vT, rT> {
using type = rT<T...>;
};
#include <utility>
template <class vT, int i, template <vT ...> class rT>
using transValues = typename transValues_s<std::make_integer_sequence<vT, i>, vT, rT>::type;
template <int i, template <int ...> class rT>
@ -427,8 +434,17 @@ template <class ...T>
using get_first = typename get_first_impl<T...>::first;
template <class T>
struct value_type_rec_impl { typedef T type; };
template <template <class...> class VT, class ...V>
struct value_type_rec_impl<VT<V...>> { typedef typename value_type_rec_impl<get_first<V...>>::type type; };
struct value_type_rec_impl<VT<V...>> {
typedef typename
std::conditional_t<
std::is_base_of_v<vector_base<get_first<V...>>, VT<V...>>,
typename value_type_rec_impl<get_first<V...>>::type,
VT<V...>
> type;
};
template <class T>
using value_type_r = typename value_type_rec_impl<T>::type;
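// A minimal sketch of the recursion; MyVecSketch is a hypothetical container used only
// for illustration, opting in by deriving from vector_base (which is what the
// is_base_of check above looks for).
template <class T> struct MyVecSketch : vector_base<T> { T* data; uint32_t n; };
static_assert(std::is_same_v<value_type_r<int>, int>);                                  // scalars pass through
static_assert(std::is_same_v<value_type_r<MyVecSketch<MyVecSketch<double>>>, double>);  // nested vectors unwrap to the scalar
static_assert(std::is_same_v<value_type_r<std::pair<int, int>>, std::pair<int, int>>);  // foreign templates are left alone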

@ -1059,15 +1059,13 @@ public:
return do_insert_or_assign(std::move(key), std::forward<M>(mapped)).first;
}
template <class K>
unsigned hashtable_push(K&& key) {
auto it_isinserted = try_emplace(std::forward<K>(key), 1);
if (!it_isinserted.second) {
++ it_isinserted.first->second;
return static_cast<unsigned>(it_isinserted.first - begin());
}
return static_cast<unsigned>(end() - begin() - 1);
}
// template <class K>
// bool hashtable_push(K&& key) {
// auto it_isinserted = try_emplace(std::forward<K>(key), 1);
// if (!it_isinserted.second)
// ++ it_isinserted.first->second;
// return it_isinserted.second;
// }
template <typename K,
typename M,
@ -1112,6 +1110,66 @@ public:
return {begin() + static_cast<difference_type>(value_idx), true};
}
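// hashtable_push returns the dense index of the key's slot: the existing slot when the
// key is already present, otherwise the index of the value just appended for it
// (AQHashTable later in this commit uses this index as a group id).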
template <class K,
typename Q = T,
typename H = Hash,
typename KE = KeyEqual>//,
//std::enable_if_t<!is_map_v<Q> && is_transparent_v<H, KE>, bool> = true>
auto hashtable_push(K&& key) -> unsigned {
if (is_full()) {
increase_size();
}
auto hash = mixed_hash(key);
auto dist_and_fingerprint = dist_and_fingerprint_from_hash(hash);
auto bucket_idx = bucket_idx_from_hash(hash);
while (dist_and_fingerprint <= at(m_buckets, bucket_idx).m_dist_and_fingerprint) {
if (dist_and_fingerprint == at(m_buckets, bucket_idx).m_dist_and_fingerprint &&
m_equal(key, m_values[at(m_buckets, bucket_idx).m_value_idx])) {
// found it, return without ever actually creating anything
return static_cast<uint32_t>(at(m_buckets, bucket_idx).m_value_idx);
}
dist_and_fingerprint = dist_inc(dist_and_fingerprint);
bucket_idx = next(bucket_idx);
}
// value is new, insert element first, so when exception happens we are in a valid state
m_values.emplace_back(std::forward<K>(key));
// now place the bucket and shift up until we find an empty spot
auto value_idx = static_cast<value_idx_type>(m_values.size() - 1);
place_and_shift_up({dist_and_fingerprint, value_idx}, bucket_idx);
return static_cast<uint32_t>(value_idx);
}
// template <class... Args>
// auto hashtable_push(Args&&... args) -> unsigned {
// if (is_full()) {
// increase_size();
// }
// // we have to instantiate the value_type to be able to access the key.
// // 1. emplace_back the object so it is constructed. 2. If the key is already there, pop it later in the loop.
// auto& key = get_key(m_values.emplace_back(std::forward<Args>(args)...));
// auto hash = mixed_hash(key);
// auto dist_and_fingerprint = dist_and_fingerprint_from_hash(hash);
// auto bucket_idx = bucket_idx_from_hash(hash);
// while (dist_and_fingerprint <= at(m_buckets, bucket_idx).m_dist_and_fingerprint) {
// if (dist_and_fingerprint == at(m_buckets, bucket_idx).m_dist_and_fingerprint &&
// m_equal(key, get_key(m_values[at(m_buckets, bucket_idx).m_value_idx]))) {
// m_values.pop_back(); // value was already there, so get rid of it
// return static_cast<uint32_t>(at(m_buckets, bucket_idx).m_value_idx);
// }
// dist_and_fingerprint = dist_inc(dist_and_fingerprint);
// bucket_idx = next(bucket_idx);
// }
// // value is new, place the bucket and shift up until we find an empty spot
// auto value_idx = static_cast<value_idx_type>(m_values.size() - 1);
// place_and_shift_up({dist_and_fingerprint, value_idx}, bucket_idx);
// return static_cast<uint32_t>(value_idx);
// }
template <class... Args>
auto emplace(Args&&... args) -> std::pair<iterator, bool> {
if (is_full()) {

@ -17,8 +17,6 @@
#include "types.h"
#include "gc.h"
#pragma pack(push, 1)
template<class T>
struct vector_base {};
struct vectortype_cstorage{
void* container;
@ -32,7 +30,6 @@ public:
void inline _copy(const vector_type<_Ty>& vt) {
// quick init while using malloc
//if (capacity > 0) free(container);
this->size = vt.size;
this->capacity = vt.capacity;
if (capacity) {
@ -52,26 +49,33 @@ public:
this->container = vt.container;
// puts("move");
vt.size = vt.capacity = 0;
vt.container = 0;
vt.container = nullptr;
}
public:
_Ty* container;
uint32_t size, capacity;
typedef _Ty* iterator_t;
typedef std::conditional_t<is_cstr<_Ty>(), astring_view, _Ty> value_t;
vector_type(const uint32_t& size) : size(size), capacity(size) {
explicit vector_type(const uint32_t& size) : size(size), capacity(size) {
if (GC::scratch_space != nullptr) {
[[likely]]
container = (_Ty*)GC::scratch_space->alloc(size * sizeof(_Ty));
}
container = (_Ty*)malloc(size * sizeof(_Ty));
// TODO: calloc for objects.
}
constexpr vector_type(std::initializer_list<_Ty> _l) {
explicit constexpr vector_type(std::initializer_list<_Ty> _l) {
size = capacity = _l.size();
_Ty* _container = this->container = (_Ty*)malloc(sizeof(_Ty) * _l.size());
this->container = (_Ty*)malloc(sizeof(_Ty) * capacity);
_Ty* _container = this->container;
for (const auto& l : _l) {
*(_container++) = l;
}
}
constexpr vector_type() noexcept : size(0), capacity(0), container(0) {};
constexpr vector_type(_Ty* container, uint32_t len) noexcept : size(len), capacity(0), container(container) {};
constexpr vector_type(const char** container, uint32_t len,
typename std::enable_if_t<!std::is_same_v<_Ty, const char*>>* = nullptr) noexcept = delete;
constexpr explicit vector_type(const vector_type<_Ty>& vt) noexcept : capacity(0) {
_copy(vt);
}
@ -81,8 +85,9 @@ public:
constexpr vector_type(vector_type<_Ty>&& vt) noexcept : capacity(0) {
_move(std::move(vt));
}
vector_type(vectortype_cstorage vt) noexcept : capacity(vt.capacity), size(vt.size), container((_Ty*)vt.container) {
out(10);
explicit vector_type(vectortype_cstorage vt) noexcept :
capacity(vt.capacity), size(vt.size), container((_Ty*)vt.container) {
// out(10);
};
// size >= capacity ==> readonly vector
constexpr vector_type(const uint32_t size, void* data) :
@ -188,15 +193,15 @@ public:
grow<false>(sz);
}
void emplace_back(const _Ty& _val) {
inline void emplace_back(const _Ty& _val) {
grow();
container[size++] = _val;
}
void emplace_back(_Ty& _val) {
inline void emplace_back(_Ty& _val) {
grow();
container[size++] = std::move(_val);
}
void emplace_back(_Ty&& _val) {
inline void emplace_back(_Ty&& _val) {
grow();
container[size++] = std::move(_val);
}
@ -213,10 +218,10 @@ public:
return _it;
}
iterator_t begin() const {
inline iterator_t begin() const {
return container;
}
iterator_t end() const {
inline iterator_t end() const {
return container + size;
}
@ -230,7 +235,7 @@ public:
return container[_i];
}
void shrink_to_fit() {
inline void shrink_to_fit() {
if (size && capacity != size) {
capacity = size;
_Ty* _container = (_Ty*)malloc(sizeof(_Ty) * size);
@ -240,13 +245,17 @@ public:
}
}
_Ty& back() {
inline void clear() {
this->size = 0;
}
inline _Ty& back() {
return container[size - 1];
}
void qpop() {
inline void qpop() {
size = size ? size - 1 : size;
}
void pop_resize() {
inline void pop_resize() {
if (size) {
--size;
if (capacity > (size << 1))
@ -259,7 +268,7 @@ public:
}
}
}
_Ty pop() {
inline _Ty pop() {
return container[--size];
}
void merge(vector_type<_Ty>& _other) {
@ -331,7 +340,7 @@ public:
inline vector_type<_Ty> subvec_deep(uint32_t start = 0) const { return subvec_deep(start, size); }
vector_type<_Ty> getRef() { return vector_type<_Ty>(container, size); }
~vector_type() {
if (capacity > 0);// GC::gc_handle->reg(container, sizeof(_Ty) * capacity);//free(container);
if (capacity > 0) GC::gc_handle->reg(container, sizeof(_Ty) * capacity);//free(container);
container = 0; size = capacity = 0;
}
#define Compare(_op) \
@ -369,7 +378,7 @@ public:
#define Ops(o, x) \
template<typename T>\
vector_type<typename types::Coercion<_Ty, T>::type> operator o (const vector_type<T>& r) const {\
/*[[likely]] if (r.size == size) {*/\
/*if (r.size == size) { [[likely]] */\
return x(r);\
/*}*/\
}
@ -377,7 +386,7 @@ public:
#define Opseq(o, x) \
template<typename T>\
vector_type<typename types::Coercion<_Ty, T>::type> operator o##= (const vector_type<T>& r) {\
/*[[likely]] if (r.size == size) {*/\
/*if (r.size == size) { [[likely]] */\
return x##eq(r);\
/*}*/\
}
@ -395,6 +404,52 @@ public:
_Make_Ops(Opseq)
};
template <>
constexpr vector_type<std::string_view>::vector_type(const char** container, uint32_t len,
typename std::enable_if_t<true>*) noexcept
{
size = capacity = len;
this->container = static_cast<std::string_view*>(
malloc(sizeof(std::string_view) * len));
for(uint32_t i = 0; i < len; ++i){
this->container[i] = container[i];
}
}
template<>
constexpr vector_type<std::string_view>::vector_type(const uint32_t size, void* data) :
size(size), capacity(0) {
this->container = static_cast<std::string_view*>(
malloc(sizeof(std::string_view) * size));
for(uint32_t i = 0; i < size; ++i){
this->container[i] = ((const char**)data)[i];
}
//std::cout<<size << container[1];
}
// template<>
// void vector_type<std::string_view>::init_from(const uint32_t size, void* data) {
// this->size = this->capacity = size;
// this->container = static_cast<std::string_view*>(
// malloc(sizeof(std::string_view) * size));
// for(uint32_t i = 0; i < size; ++i){
// this->container[i] = container[i];
// }
// }
// template<template <typename> class VT>
// inline void
// prealloc_vector (VT &vt, uint32_t sz) {
// vt.reserve(sz);
// }
// template<class T>
// inline void
// prealloc_vector (vector_type<vector_type<T>> &vt,
// uint32_t outer_sz, uint32_t inner_sz) {
// vt.reserve(outer_sz);
// auto mem = static_cast<T*>(malloc(inner_sz * sizeof(T)));
// }
template <>
class vector_type<void> {
@ -428,4 +483,48 @@ public:
vector_type<void> subvec_deep(uint32_t);
};
#pragma pack(pop)
template <class Key, class Hash>
class AQHashTable : public ankerl::unordered_dense::set<Key, Hash> {
public:
uint32_t* reversemap, *mapbase, *ht_base;
AQHashTable() = default;
explicit AQHashTable(uint32_t sz)
: ankerl::unordered_dense::set<Key, Hash>{} {
this->reserve(sz);
reversemap = static_cast<uint32_t *>(malloc(sizeof(uint32_t) * sz * 2));
mapbase = reversemap + sz;
ht_base = static_cast<uint32_t *>(calloc(sz, sizeof(uint32_t)));
}
void init(uint32_t sz) {
ankerl::unordered_dense::set<Key, Hash>::reserve(sz);
reversemap = static_cast<uint32_t *>(malloc(sizeof(uint32_t) * sz * 2));
mapbase = reversemap + sz;
ht_base = static_cast<uint32_t *>(calloc(sz, sizeof(uint32_t)));
}
inline void hashtable_push(Key&& k, uint32_t i){
reversemap[i] = ankerl::unordered_dense::set<Key, Hash>::hashtable_push(std::move(k));
++ht_base[reversemap[i]];
}
auto ht_postproc(uint32_t sz) {
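// counting sort over reversemap: ht_base holds per-group counts, turned into running
// prefix sums, then every row id is scattered into its group's slice of mapbase;
// the result is one vector_type<uint32_t> of row ids per distinct key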
auto& arr_values = this->values();
const auto& len = this->size();
auto vecs = static_cast<vector_type<uint32_t>*>(malloc(sizeof(vector_type<uint32_t>) * len));
vecs[0].init_from(ht_base[0], mapbase);
for (uint32_t i = 1; i < len; ++i) {
vecs[i].init_from(ht_base[i], mapbase + ht_base[i - 1]);
ht_base[i] += ht_base[i - 1];
}
for (uint32_t i = 0; i < sz; ++i) {
auto id = reversemap[i];
mapbase[--ht_base[id]] = i;
}
return vecs;
}
};
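// A minimal sketch of how generated post-processor code could use this table for a
// group-by (assumptions: this header and the bundled ankerl::unordered_dense set are
// included as above; aq_group_rows is a hypothetical helper, and buffer ownership is
// left to the caller in this sketch).
template <class KeyT>
vector_type<uint32_t>* aq_group_rows(vector_type<KeyT>& keys) {
    const uint32_t n = keys.size;
    AQHashTable<KeyT, ankerl::unordered_dense::hash<KeyT>> ht{ n };
    for (uint32_t i = 0; i < n; ++i)
        ht.hashtable_push(KeyT{ keys[i] }, i);  // record the dense group id of row i
    return ht.ht_postproc(n);                   // one vector of row indices per distinct key
}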
#endif
