
Commit

update
Aaaaaaron committed May 8, 2021
1 parent 1b4ad25 commit 3ee8606
Showing 105 changed files with 135 additions and 145 deletions.
2 changes: 0 additions & 2 deletions Calcite - Parser 部分.md
@@ -351,11 +351,9 @@ graph LR;
```

- SqlNode: the abstract syntax tree (AST), a tree structure (a minimal parsing sketch follows this list).
- ![image-20200112112033715](Calcite 01.assets/image-20200112112033715.png)
- RelNode: a logical plan node such as TableScan, Project, Sort, or Join; also a tree structure.
- Inherits from RelOptNode, meaning it can be optimized by the optimizer.
- `RelTraitSet#getTraitSet();` defines the physical traits of the relation (distribution/collation).
- ![image-20200112112335849](Calcite 01.assets/image-20200112112335849.png)
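The SQL-to-SqlNode step can be shown in a few lines. A minimal sketch, assuming a recent Calcite version on the classpath; the SQL string and class name are made up for illustration:

```java
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;

public class ParseDemo {
  public static void main(String[] args) throws Exception {
    // Parse a SQL string into a SqlNode AST using Calcite's default parser config.
    SqlParser parser = SqlParser.create("SELECT id, name FROM users WHERE age > 18");
    SqlNode ast = parser.parseQuery();   // SQL -> SqlNode
    System.out.println(ast);             // prints the re-rendered SQL
  }
}
```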

## SQL parsing phase (SQL -> SqlNode)

6 changes: 3 additions & 3 deletions Callback与-Coroutine-协程概念说明.md
@@ -13,19 +13,19 @@ tags:
**The key is to distinguish the kernel layer from the application layer.**

**Blocking vs. non-blocking**: whether the application's call returns immediately. If the caller is suspended and cannot do anything else, it is blocking; if it can immediately be pulled away to work on other tasks, it is non-blocking.
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/20191009205805.png)
![](Callback与-Coroutine-协程概念说明/20191009205805-20210508114757190.png)

Does being suspended mean nothing gets done? No. When a thread blocks on a file read, its resources are yielded to others. The same goes for coroutines: with goroutines, for example, the Go scheduler reassigns the resources of a blocked goroutine to other goroutines. The key point is that a thread switch is far more expensive than a coroutine switch (a thread switch involves crossing between kernel mode and user mode).

**Coroutines vs. threads**: one is scheduled actively (cooperative scheduling, done by the application itself), the other passively (preemptive scheduling, done by the operating system).

**Asynchronous vs. synchronous**: whether the process blocks while the data is being copied. Synchronous: the application layer asks the kernel itself (polling?); asynchronous: the kernel actively notifies the application layer when the data is ready. An IO operation has two phases: the kernel waits for the event, then copies the data into the user buffer. If the caller has to wait on either phase, it is synchronous IO.

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/20191009205836.png)
![](Callback与-Coroutine-协程概念说明/20191009205836-20210508114803795.png)

In the asynchronous non-blocking model there is no pointless suspension, sleeping, or waiting, and no blind polling or checking; the application layer never waits and makes maximal use of its own resources, while the kernel considerately notifies the application layer once the work is done so it can collect the result.

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/20191011223017.png)
![](Callback与-Coroutine-协程概念说明/20191011223017-20210508114805873.png)
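As a concrete (if simplified) illustration of the callback style in the non-blocking model, here is a sketch in Java using CompletableFuture; the file path is hypothetical, and the executor thread stands in for "the kernel" doing the work and notifying the caller:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.CompletableFuture;

public class AsyncReadDemo {
  public static void main(String[] args) throws Exception {
    // Hand the read off to another thread and register a callback;
    // the calling thread is not blocked while the data is produced.
    CompletableFuture
        .supplyAsync(() -> {
          try {
            return Files.readAllBytes(Paths.get("/tmp/data.bin")); // hypothetical file
          } catch (Exception e) {
            throw new RuntimeException(e);
          }
        })
        .thenAccept(bytes -> System.out.println("callback: got " + bytes.length + " bytes"));

    System.out.println("caller keeps doing other work...");
    Thread.sleep(1000); // keep the JVM alive long enough for the demo callback to run
  }
}
```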

### Processes, threads, and coroutines

21 changes: 12 additions & 9 deletions Designing-Data-Intensive-Applications-Storage.md
@@ -1,7 +1,9 @@
---
title: Designing Data-Intensive Applications(Storage)
date: 2018-08-26 19:30:40
tags: BigData
tags:
- BigData
- DDIA
---
# Index basics
Log: generally any append-only sequence of records. It does not have to be human-readable; it can be binary.
@@ -15,17 +17,18 @@ An index simply stores some extra information to help you locate
## Hash Index
To find the content for key 123456, you only need to look up its byte offset in the in-memory map, then do a single seek to that position and read up to the next key's offset (subtracting the two offsets gives the length). The read is very precise.

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/893370.jpg)
![](Designing-Data-Intensive-Applications-Storage/893370.jpg)
> Storing a log of key-value pairs in a CSV-like format, indexed with an in-memory hash map.
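A minimal sketch of that idea (an append-only log plus an in-memory offset map); the file name and the CSV-like record format are made up for illustration:

```java
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class HashIndexedLog {
  private final RandomAccessFile log;
  private final Map<String, Long> offsets = new HashMap<>(); // key -> byte offset

  public HashIndexedLog(String path) throws Exception {
    this.log = new RandomAccessFile(path, "rw");
  }

  public void put(String key, String value) throws Exception {
    long offset = log.length();
    log.seek(offset);                                                       // append only
    log.write((key + "," + value + "\n").getBytes(StandardCharsets.UTF_8));
    offsets.put(key, offset);                                               // latest offset wins
  }

  public String get(String key) throws Exception {
    Long offset = offsets.get(key);
    if (offset == null) return null;
    log.seek(offset);                      // one seek, then a precise read
    String line = log.readLine();
    return line.substring(line.indexOf(',') + 1);
  }
}
```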

Because writes are append-only, a single file can grow too large. The solution is to start a new segment file once the log reaches a certain size. These segments can later be compacted: compaction removes duplicate keys and keeps only the latest version of each.

Multiple segments can also be merged.
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/39145451.jpg)
![](Designing-Data-Intensive-Applications-Storage/39145451.jpg)

> Compacting a single segment
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/13193271.jpg)
![](Designing-Data-Intensive-Applications-Storage/13193271.jpg)
> Compacting segments while also merging them.
### Limitations
@@ -37,15 +40,15 @@ An index simply stores some extra information to help you locate

### Advantages
1. Segments much larger than memory can be merged (using merge sort); for a key that appears in several segments, only the value from the newest segment needs to be kept.
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/65990847.jpg)
![](Designing-Data-Intensive-Applications-Storage/65990847.jpg)

2. The index does not need to keep every key: since all keys are stored in sorted order, a few of them can serve as anchors and the rest can be found between them. Suppose you are looking for the key handiwork but do not know its exact offset in the segment file. You do know the offsets of handbag and handsome, and because of the sort order you know handiwork must lie between the two. So you can jump to the offset of handbag and scan from there until you find handiwork (or determine it is absent from the file).

This way the in-memory index can be sparse: one key per few kilobytes of segment file is enough, because a few kilobytes can be scanned very quickly.

If every key-value pair were fixed-length, binary search alone would work and the in-memory index could be dropped entirely; in practice, however, they are usually variable-length.

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/28391204.jpg)
![](Designing-Data-Intensive-Applications-Storage/28391204.jpg)
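A sketch of the sparse-index lookup using a sorted in-memory map (the keys and byte offsets are the hypothetical ones from the example above):

```java
import java.util.Map;
import java.util.TreeMap;

public class SparseIndexDemo {
  public static void main(String[] args) {
    TreeMap<String, Long> sparseIndex = new TreeMap<>();
    sparseIndex.put("handbag", 10_240L);   // hypothetical byte offsets
    sparseIndex.put("handsome", 14_336L);

    String target = "handiwork";
    // floorEntry returns the greatest indexed key <= target: handbag here.
    Map.Entry<String, Long> start = sparseIndex.floorEntry(target);
    System.out.println("scan forward from offset " + start.getValue()
        + " (key " + start.getKey() + ") until " + target + " is found or passed");
  }
}
```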

### Building and maintaining SSTables
Maintaining a sorted structure on disk is possible (see B-trees), but maintaining it in memory is much simpler; typical in-memory structures are red-black trees and AVL trees.
@@ -76,15 +79,15 @@ An index simply stores some extra information to help you locate
Each page can be identified by an address or location, which lets one page refer to another, similar to a pointer but on disk rather than in memory. These page references can be used to build a tree of pages.

We want to find 251, so we search within the [200, 300] range.
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/89272936.jpg)
![](Designing-Data-Intensive-Applications-Storage/89272936.jpg)

The number of references to child pages in one B-tree page is called the branching factor; in the figure above it is 6. In practice the branching factor depends on how much space the page references and range boundaries take, but it is typically several hundred.

To update the value of an existing key in a B-tree, search for the leaf page containing that key, change the value in that page, and write the page back to disk (any references to the page remain valid). To add a new key, find the page whose range covers the new key and add it there; if the page does not have enough free space for it, split it into two half-full pages and update the parent page to reflect the new partitioning of key ranges.

The algorithm keeps the tree balanced: a B-tree with n keys always has depth O(log n). Most databases fit in a B-tree three or four levels deep, so you do not need to follow many page references to find the page you are looking for. A four-level tree of 4 KB pages with a branching factor of 500 can store up to 256 TB.

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-8-24/30223544.jpg)
![](Designing-Data-Intensive-Applications-Storage/30223544.jpg)
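A quick sanity check of the 256 TB figure above (a rough back-of-the-envelope sketch; it counts only leaf pages and uses decimal terabytes):

```java
public class BTreeCapacityCheck {
  public static void main(String[] args) {
    long branching = 500;
    long pageSizeBytes = 4 * 1024;
    long leafPages = branching * branching * branching * branching; // 500^4 = 6.25e10 leaf pages
    long capacityBytes = leafPages * pageSizeBytes;                 // 2.56e14 bytes
    System.out.println(capacityBytes / 1_000_000_000_000L + " TB"); // prints 256
  }
}
```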

The basic low-level write operation of a B-tree is to overwrite a page on disk with new data while the references to it stay unchanged, which is exactly the opposite of an LSM-tree (append only, never modify files in place).

@@ -112,7 +115,7 @@ The basic low-level write operation of a B-tree is to overwrite a page on disk with new data

# Column-oriented storage
If a column has a large number of values but low cardinality, a bitmap can be used per distinct value, which compresses very efficiently.
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-9-1/37827590.jpg)
![](Designing-Data-Intensive-Applications-Storage/37827590.jpg)
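A small sketch of what a per-value bitmap looks like for a low-cardinality column (the column values here are invented; real systems additionally run-length-encode the bitmaps):

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

public class BitmapIndexDemo {
  public static void main(String[] args) {
    String[] column = {"US", "CN", "US", "DE", "CN", "US"};  // one value per row
    Map<String, BitSet> bitmaps = new HashMap<>();
    for (int row = 0; row < column.length; row++) {
      bitmaps.computeIfAbsent(column[row], v -> new BitSet()).set(row);
    }
    // WHERE country = 'US' -> rows {0, 2, 5}; sparse bitmaps compress very well.
    System.out.println("US rows: " + bitmaps.get("US"));
  }
}
```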

## Vectorized processing
Besides reducing the amount of data that has to be loaded from disk, a column-oriented layout also makes good use of CPU cycles. For example, the query engine can load a chunk of compressed column data (denser information per byte) into the CPU's L1 cache, and the bitwise AND/OR operations mentioned earlier can be applied directly to those chunks.
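A sketch of applying a bitwise AND chunk by chunk; each long word covers 64 rows, and the bitmap values are invented for illustration:

```java
public class VectorizedAndDemo {
  public static void main(String[] args) {
    long[] matchesCountryUs = {0b1010_0110L, 0xFF00L};  // hypothetical bitmap chunks
    long[] matchesAgeOver18 = {0b1110_0010L, 0x0F0FL};
    long[] both = new long[matchesCountryUs.length];
    for (int i = 0; i < both.length; i++) {
      both[i] = matchesCountryUs[i] & matchesAgeOver18[i]; // 64 rows per AND
    }
    System.out.println(Long.toBinaryString(both[0]));
  }
}
```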
16 changes: 8 additions & 8 deletions Druid-Storage-原理.md
@@ -5,7 +5,7 @@ tags:
- Druid
- BigData
---
Reposted from [编程小梦](https://blog.bcmeng.com/post/druid-storage.html); this blogger's whole series of posts is of very high quality
Reposted from [编程小梦](https://blog.bcmeng.com/post/druid-storage.html)

### What is Druid

@@ -20,7 +20,7 @@ Druid is an open-source real-time OLAP system that can provide, over extremely large-scale data,

To extract business value from big data we inevitably need to analyze it, especially through multidimensional analysis. A few years ago, however, the industry had no really good OLAP tool; the various approaches to multidimensional analysis are shown in the figure below:

![屏幕快照 2017-10-31 下午8.27.50.png-1080.8kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/55283969.jpg)
![](Druid-Storage-原理/55283969.jpg)

Queries that go directly through Hive, MR, or Spark are generally very slow with low concurrency; traditional relational databases cannot handle data at this scale; and NoSQL databases represented by HBase cannot provide efficient filtering and aggregation. Because the existing tools all had pain points of one kind or another, Druid emerged, and the following naturally became its design goals:

@@ -31,7 +31,7 @@ Druid is an open-source real-time OLAP system that can provide, over extremely large-scale data,

### Druid architecture

![image.png-181kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/98456054.jpg)
![image.png-181kB](Druid-Storage-原理/98456054.jpg)

Druid's overall architecture is shown in the figure above; there are three main paths:

@@ -75,7 +75,7 @@ Deep storage (S3 and HDFS) serves as the permanent backup of segments; at query time it likewise

### Column

![屏幕快照 2017-10-27 下午3.45.05.png-278kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/34009295.jpg)
![](Druid-Storage-原理/34009295.jpg)

Columns in Druid fall into three main categories: the time column, dimension columns, and metric columns. Druid relies on the time column both at ingestion and at query time, which is reasonable because multidimensional analysis usually involves a time dimension. Dimensions and metrics are common OLAP concepts: dimensions are mainly attributes of an event, typically used for filtering and group by at query time, while metrics are used for aggregation and computation and are usually numeric, e.g. count, sum, min, max.

@@ -87,7 +87,7 @@ Dimension columns in Druid support String, Long, and Float, but only the String type

### Segment storage format

![image.png-90kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/29577015.jpg)
<img src="Druid-Storage-原理/29577015.jpg" alt="image.png-90kB" style="zoom:50%;" />

The storage format of a Druid segment is shown in the figure above; it has three parts:

@@ -105,7 +105,7 @@ The smoosh file also contains an index.drd file and a metadata.drd file, where index.drd

Let's first look at the storage format of metric columns:

![image.png-35.9kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/8107893.jpg)
<img src="Druid-Storage-原理/8107893.jpg" alt="image.png-35.9kB" style="zoom:50%;" />

The storage format of a metric column is shown in the figure above:

@@ -132,7 +132,7 @@ For complex metrics such as HyperUnique, Cardinality, Histogram, and Sketch, Druid does not

### Storage format of String dimensions

![image.png-81.2kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/79137485.jpg)
![image.png-81.2kB](Druid-Storage-原理/79137485.jpg)

The storage format of a String dimension is shown in the figure above. As mentioned earlier, the time, dimension, and metric columns each consist of two parts: a ColumnDescriptor and the binary data. The binary data of a String dimension has three main parts: the dict, the array of dictionary-encoded ids, and the bitmaps used for the inverted index.
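A toy sketch of those three parts (dictionary, encoded id array, inverted-index bitmaps); the city values are invented, and unlike real Druid the dictionary here assigns ids in arrival order rather than sorted order:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StringDimensionSketch {
  public static void main(String[] args) {
    String[] rows = {"beijing", "shanghai", "beijing", "shenzhen"};

    Map<String, Integer> dict = new HashMap<>();   // value -> id
    List<Integer> encodedIds = new ArrayList<>();  // one id per row
    Map<String, BitSet> invertedIndex = new HashMap<>();

    for (int row = 0; row < rows.length; row++) {
      Integer id = dict.get(rows[row]);
      if (id == null) {
        id = dict.size();          // next id
        dict.put(rows[row], id);
      }
      encodedIds.add(id);
      invertedIndex.computeIfAbsent(rows[row], v -> new BitSet()).set(row);
    }

    System.out.println("dict = " + dict);                                  // {beijing=0, shanghai=1, shenzhen=2}
    System.out.println("ids  = " + encodedIds);                            // [0, 1, 0, 2]
    System.out.println("beijing rows = " + invertedIndex.get("beijing"));  // {0, 2}
  }
}
```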

@@ -155,7 +155,7 @@ The storage format of a String dimension is shown in the figure above; as mentioned earlier, the time

### Segment load process

![meta.png-44.3kB](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-10-6/34569900.jpg)
<img src="Druid-Storage-原理/34569900.jpg" alt="meta.png-44.3kB" style="zoom:50%;" />

1. Read version
2. Load segment to MappedByteBuffer
Binary file added Druid-Storage-原理/29577015.jpg
Binary file added Druid-Storage-原理/34009295.jpg
Binary file added Druid-Storage-原理/34569900.jpg
Binary file added Druid-Storage-原理/55283969.jpg
Binary file added Druid-Storage-原理/79137485.jpg
Binary file added Druid-Storage-原理/8107893.jpg
Binary file added Druid-Storage-原理/98456054.jpg
3 changes: 3 additions & 0 deletions Hadoop-MR-和-Spark-对比.md
@@ -2,6 +2,9 @@
title: 'Hadoop MR 和 Spark 对比'
date: 2018-11-02 15:21:13
tags:
- MR
- Spark
- BigData
---
### 0. Startup overhead
Summary: the fundamental reason Spark is faster than MapReduce is its DAG computation model, but MR's real weakness is that its level of abstraction is too low, leaving a lot of low-level logic for developers to write by hand. That does not mean MR is useless: there is no best technology, only the technology that fits your needs.
2 changes: 1 addition & 1 deletion Java-Streaming-Deep-Dive.md
@@ -154,4 +154,4 @@ deleteOnExit will not necessarily succeed if the file still has streams that are not closed.

The FileInputStream class is an interface for operating on a single file. Note that when a FileInputStream object is created, a FileDescriptor object is created as well; this object is the actual description of an existing file. While operating on a file you can call getFD() to obtain the file descriptor associated with the underlying operating system; for example, FileDescriptor.sync() can be called to force data in the OS cache to be flushed to the physical disk.

![](https://www.ibm.com/developerworks/cn/java/j-lo-javaio/image015.jpg)
![](Java-Streaming-Deep-Dive/image015-20210508110323870.jpg)
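A minimal sketch of forcing OS-cached data to the physical disk via the file descriptor (the path is made up for illustration):

```java
import java.io.FileDescriptor;
import java.io.FileOutputStream;

public class SyncDemo {
  public static void main(String[] args) throws Exception {
    try (FileOutputStream out = new FileOutputStream("/tmp/demo.txt")) { // hypothetical path
      out.write("hello".getBytes());
      FileDescriptor fd = out.getFD(); // the descriptor tied to the underlying OS file
      fd.sync();                       // ask the OS to flush its cache to disk
    }
  }
}
```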
7 changes: 4 additions & 3 deletions Learning-Parquet.md
@@ -3,6 +3,7 @@ title: Learning Parquet
date: 2018-10-30 19:09:46
tags:
- Parquet
- BigData
---
# Glossary

@@ -20,11 +21,11 @@ tags:
- IO - Column chunk
- Encoding/Compression - Page

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/FileLayout.gif)
![](Learning-Parquet/FileLayout.gif)

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/FileFormat.gif)
![](Learning-Parquet/FileFormat.gif)

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/Parquet%20%E6%96%87%E4%BB%B6%E6%A0%BC%E5%BC%8F.png)
![](Learning-Parquet/Parquet 文件格式.png)

# Deep
It is precisely because of row groups that a Parquet file is splittable, with each split containing complete records (Spark also cuts splits along row group boundaries).
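A hedged sketch of inspecting the row groups of a file with parquet-hadoop (the file path is hypothetical, and the method names are written from memory, so treat this as an outline rather than a definitive API reference):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class RowGroupDemo {
  public static void main(String[] args) throws Exception {
    Path path = new Path("/tmp/example.parquet"); // hypothetical file
    try (ParquetFileReader reader =
             ParquetFileReader.open(HadoopInputFile.fromPath(path, new Configuration()))) {
      // Each block in the footer is one row group; a split never cuts through one.
      for (BlockMetaData rowGroup : reader.getFooter().getBlocks()) {
        System.out.println("rows=" + rowGroup.getRowCount()
            + " bytes=" + rowGroup.getTotalByteSize());
      }
    }
  }
}
```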
Binary file added Learning-Parquet/FileFormat.gif
Binary file added Learning-Parquet/FileLayout.gif
Binary file added Learning-Parquet/Parquet 文件格式.png
19 changes: 9 additions & 10 deletions Maven-打包趟坑与解法.md
@@ -27,24 +27,23 @@ tags:

parquet-column shades a copy of fastutil; run jar -tf parquet-column.jar and you will see fastutil inside it.

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-11-8/50973815.jpg)
![](Maven-打包趟坑与解法/50973815.jpg)

You can see there are two jars: one is the original without the shaded fastutil, and the other contains it and is also the jar that goes into the Maven repository.
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-11-8/29736635.jpg)
![](Maven-打包趟坑与解法/29736635.jpg)

Confirm with jar -tf:
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-11-8/57385130.jpg)
![](Maven-打包趟坑与解法/57385130.jpg)


### General approach to locating the problem
0. When you hit an error such as `java.lang.NoClassDefFoundError` and you are running inside IDEA, it is very likely a provided dependency. First check whether the dependency jar shows up in the classpath IDEA prints,![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/5.png), then check the scope in the iml file ![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/11871540974627_.pic_hd.jpg). The fix is to change everything in the iml to compile, or to tick this option in IDEA:
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/11861540974584_.pic.jpg)
0. When you hit an error such as `java.lang.NoClassDefFoundError` and you are running inside IDEA, it is very likely a provided dependency. First check whether the dependency jar shows up in the classpath IDEA prints,![](Maven-打包趟坑与解法/5.png)

1. If you run into NoSuchMethodError, ClassNotFoundException, and the like, look at the printed classpath first. In IDEA you can view it directly:
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-11-8/76336665.jpg)
1. If you run into NoSuchMethodError, ClassNotFoundException, and the like, look at the printed classpath first. In IDEA you can view it directly; a ClassNotFoundException means the class really is not there:
![](Maven-打包趟坑与解法/76336665.jpg)

2. Then hit double shift and search for the problematic class; usually several copies show up:
![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-11-8/65076854.jpg)
![](Maven-打包趟坑与解法/65076854.jpg)

3. Then run `mvn dependency:tree` to see which version of the dependency the current module is actually using.
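A small helper (not from the original post) that prints which jar a suspect class was actually loaded from; it pairs well with `mvn dependency:tree -Dincludes=<groupId>` for narrowing the tree to the conflicting artifact:

```java
public class WhereIsClassLoadedFrom {
  public static void main(String[] args) throws Exception {
    // Example class; substitute whichever class is throwing NoSuchMethodError etc.
    Class<?> clazz = Class.forName("com.fasterxml.jackson.databind.ObjectMapper");
    System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());
  }
}
```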

@@ -60,7 +59,7 @@ Confirm with jar -tf

But whether you look at `mvn dependency:tree` or at the jars loaded at runtime, the correct `jackson-databind-2.6.5.jar` is being used. The tricky part is that the class in question does not actually come from `jackson-databind` at all, but from another jar that shaded it without relocation. Unless you remove that jar from the dependencies, excluding it inside that jar's own dependencies or adding the correct `jackson-databind-2.6.5.jar` at the outermost level is useless, as shown below:

![](https://aron-blog-1257818292.cos.ap-shanghai.myqcloud.com/18-9-14/40098244.jpg)
![](Maven-打包趟坑与解法/40098244.jpg)

So the exclude in the highlighted box is useless as well. The fix is to make it an external dependency and exclude it.

@@ -202,4 +201,4 @@ The official definitions of Maven's different scopes:

This is much like compile, but indicates you expect the JDK or a container to provide the dependency at runtime. For example, when building a web application for the Java Enterprise Edition, you would set the dependency on the Servlet API and related Java EE APIs to scope provided because the web container provides those classes. This scope is only available on the compilation and test classpath, and is not transitive.

We often use `-pl :moduleName`, which looks odd; in fact what is omitted before the colon is the groupId.
We often use `-pl :moduleName`, which looks odd; in fact what is omitted before the colon is the groupId.
Binary file added Maven-打包趟坑与解法/29736635.jpg
Binary file added Maven-打包趟坑与解法/40098244.jpg
Binary file added Maven-打包趟坑与解法/5.png
Binary file added Maven-打包趟坑与解法/50973815.jpg
Binary file added Maven-打包趟坑与解法/57385130.jpg
Binary file added Maven-打包趟坑与解法/65076854.jpg
Binary file added Maven-打包趟坑与解法/76336665.jpg
23 changes: 2 additions & 21 deletions Parquet-encoding-definitions-official.md
@@ -5,31 +5,12 @@ tags:
- Parquet
- BigData
---
<!--
- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements. See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership. The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied. See the License for the
- specific language governing permissions and limitations
- under the License.
-->

[Parquet encoding definitions](https://github.com/apache/parquet-format/blob/master/Encodings.md)
====

This file contains the specification of all supported encodings.

### <a name="PLAIN"></a>Plain: (PLAIN = 0)
### Plain: (PLAIN = 0)

Supported Types: all

@@ -264,4 +245,4 @@ For a longer description, see https://en.wikipedia.org/wiki/Incremental_encoding.
For a longer description, see https://en.wikipedia.org/wiki/Incremental_encoding.

This is stored as a sequence of delta-encoded prefix lengths (DELTA_BINARY_PACKED), followed by
the suffixes encoded as delta length byte arrays (DELTA_LENGTH_BYTE_ARRAY).
the suffixes encoded as delta length byte arrays (DELTA_LENGTH_BYTE_ARRAY).
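
A toy sketch of the incremental (front) encoding idea described above, using invented sorted strings:

```java
import java.util.ArrayList;
import java.util.List;

public class IncrementalEncodingDemo {
  public static void main(String[] args) {
    String[] sorted = {"handbag", "handiwork", "handsome"};
    List<String> encoded = new ArrayList<>();
    String prev = "";
    for (String s : sorted) {
      // Length of the prefix shared with the previous entry, then the remaining suffix.
      int prefix = 0;
      while (prefix < prev.length() && prefix < s.length()
          && prev.charAt(prefix) == s.charAt(prefix)) {
        prefix++;
      }
      encoded.add(prefix + "|" + s.substring(prefix));
      prev = s;
    }
    System.out.println(encoded); // [0|handbag, 4|iwork, 4|some]
  }
}
```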