# Symbolic Execution Engine for the K Framework
## Structure of this project
This project implements the tools for symbolically executing programs in languages specified using the [K Framework]. The symbolic execution is performed at the level of `Kore`, an intermediate representation produced by the K compiler from a language specification.
The K framework is a term-rewriting-based specification language, and therefore its symbolic execution backend implements a symbolic term rewriter. This rewriter is provided by the `kore-rpc-booster` executable, a JSON-RPC server that implements the [KORE RPC protocol].
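As an illustration only (a sketch, not a verbatim session): the server accepts JSON-RPC 2.0 requests over a plain TCP socket, so a request can be piped in with a generic tool such as `nc`. The port number, the `execute` parameters, and the elided `term` payload below are all placeholder assumptions.

```sh
# Hypothetical session: assumes a server is already listening on localhost:31337.
# The "state" field must carry a full KORE term in JSON format; it is elided here.
echo '{"jsonrpc": "2.0", "id": 1, "method": "execute", "params": {"max-depth": 100, "state": {"format": "KORE", "version": 1, "term": {}}}}' \
  | nc localhost 31337
```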
The `kore-rpc` executable is a legacy version of the RPC-based rewriter that implements the same protocol. Finally, `kore-exec` and `kore-repl` are the deprecated non-RPC entry points.
Note that this project is low-level and is not intended to be a user-facing tool. For the Python-based theorem prover based on the KORE RPC protocol, see the [pyk] package in the [K Framework] repository.
### Kompiling a K definition and running the RPC server
The `kore-rpc-booster` binary takes a `kore` definition file, parses and internalises it, and then launches an RPC server which executes requests against this definition. It additionally accepts a path to a dynamic library compiled by the LLVM backend, which is used to simplify Bool-sorted terms. In order to build the kore definition and the shared library from a K definition, first call
```
kompile --llvm-kompile-type c my_definition.k
```
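Once `kompile` has produced the `definition.kore` file and the shared library, the server can be launched against them. The following is only a sketch: the kompiled directory, module name, and library file name are assumptions based on common defaults, so consult `kore-rpc-booster --help` for the authoritative flag names.

```sh
# Hypothetical paths: kompile writes its outputs to <name>-kompiled/ by default
kore-rpc-booster my_definition-kompiled/definition.kore \
  --module MY-DEFINITION \
  --llvm-backend-library my_definition-kompiled/interpreter.so \
  --server-port 31337
```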
Besides [git], you will need [stack] or [cabal] to build the project.
```sh
stack build all
# or
cabal build all
```
You may pass `--fast` to `stack build` or `-O0` to `cabal build` in order to disable compiler optimizations and make the build faster, at the cost of slower runtime.
## Developing
Developers will require all the dependencies listed above.

You can install/have access to K by either:
* using [kup]
* using a pre-built binary (see the releases page in the K repository)
* if you use Nix, see the section below
* or by just building K from source
### Recommended dependencies
For setting up a development environment, we recommend:
- [direnv] to make the project's tools available in shells and editors.
- [ghcup] or [Nix] for managing required Haskell tooling
- [hlint] and [fourmolu] for compliance with project guidelines.
### Running Haskell Language Server
If you use Nix, see the section below on how to set up your [Nix] environment to build the server.
Prerequisite: build the project with either Stack or Cabal.
Instructions on integrating with VSCode:
1. Install the [Haskell extension]
2. Go to `Extension Settings` and pick `GHCup` in the `Manage HLS` dropdown
3. (Optional) Manually set the GHCup executable path
4. (Extra) Set `Formatting Provider` to `fourmolu` for correct formatting
You may also need to install the [nix-env-selector](https://marketplace.visualstudio.com/items?itemName=arrterian.nix-env-selector) extension.
The `nix-env-selector` extension may prompt for the workspace to be re-loaded. Once re-loaded, HLS should start working.
### Developing with Nix
To build and run nix-based packages at RV, please follow these instructions to set up nix. Files can be formatted via the flake, e.g. `nix run .#format Foo.hs`.

We provide a development nix shell with a suitable development environment and a binary cache at [runtimeverification.cachix.org]. The development shell can be launched via `nix develop`, after which you can call `stack build`, `stack test`, etc.
### Nix-direnv
Using a version of direnv that works with nix (https://github.com/nix-community/nix-direnv) allows seamless loading and unloading of the nix shell, which adds all the required packages such as `cabal`, `hpack`, `fourmolu`, etc. Use the above link to install `nix-direnv`, making sure to hook direnv into whichever shell you are using (https://direnv.net/docs/hook.html). You can use the default nix shell (currently GHC version 9.6.4) by running
```bash
echo"use flake"> .envrc
```
Finally, run `direnv allow` inside the repo folder to load up the nix shell.
Note that only `cabal` currently works within the nix shell, and since it does not support the HPack `package.yaml` file format, any changes to this file will require running `hpack` before they are picked up by cabal.
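For example, a minimal sketch of that edit-and-rebuild cycle:

```sh
# after editing package.yaml, regenerate the .cabal file, then build as usual
hpack
cabal build all
```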
### Upgrading dependencies
When one of the package description files (`kore.cabal`, `kore-rpc-types.cabal`) changes, or when upgrading to a newer `stack` resolver, the dependencies need to be consolidated to avoid accidental breakage from incompatible upstream updates. We use a `cabal.project.freeze` file to pin the dependencies to what the current `stack` resolver is using.
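If the freeze file ever needs to be regenerated by hand, cabal's generic `freeze` command is one option; this is only a sketch, since `cabal freeze` pins whatever versions the current build plan selects, and the repository may provide its own tooling for syncing with the `stack` resolver:

```sh
cabal freeze   # rewrites cabal.project.freeze with the pinned dependency versions
```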
### scripts/performance-tests-{kevm, kontrol}.sh

Call these scripts from the root of the repo to obtain performance numbers for the KEVM and Kontrol test suites. These are necessary for any new feature that is expected to modify the performance of the booster, and the timings should be included in the PR.
#### scripts/booster-analysis.sh
This script can be used with any folder containing bug reports to build an analysis of fallback/abort reasons in the booster. To obtain bug reports, first run `PYTEST_PARALLEL=8 scripts/performance-tests-kevm.sh --bug-report`, which will generate tarballs for all the tests and drop them into `scripts/bug-reports/`. Then call `scripts/booster-analysis.sh scripts/bug-reports/kevm-v1.0.417-main`.
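Concretely, the two steps chain together like this (the bug-report folder name will vary with the release under test):

```sh
PYTEST_PARALLEL=8 scripts/performance-tests-kevm.sh --bug-report
scripts/booster-analysis.sh scripts/bug-reports/kevm-v1.0.417-main
```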
### Generating an integration test from a bug-report.tar.gz
1) Rename `bug-report.tar.gz` to something more descriptive like `issue-123.tar.gz`
2) Copy the tar file into `test/rpc-integration/`
3) Run `./generateDirectoryTest.sh issue-123.tar.gz`. This will copy the definition files into `resources/` and the rpc commands into `test-issue-123/`
4) Run the test via `./runDirectoryTest.sh test-issue-123`
Note that it is also possible to run a test directly from a tarball with `runDirectoryTest.sh`, skipping the unpacking step. This is useful in automated workflows that involve running several tarballs.
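For instance (hypothetical tarball name, assuming the script accepts a tarball path directly):

```sh
./runDirectoryTest.sh issue-123.tar.gz
```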
### Pretty printing KORE JSON
There is a simple utility called `pretty` which can be used to pretty-print a KORE JSON term from a bug report (which does not contain the original K definition):
```
cabal run pretty -- ../definition.kore <(jq '.result.state' XXX_response.json)
```
## booster/README.md

A simpler and faster version of [K-Framework's haskell-backend](../haskell-backend):
* A simple rewrite engine that focuses on the K configuration as the main term
* Aims to solve _easy and common_ rewrites quickly, rather than _all_ rewrites completely
* Reverts to the standard backend for complex unification and term simplification
At the moment, all code lives in a single package. This might change in the future as the software grows in terms of features.
### scripts/update-haskell-backend.sh
To bump the version of the haskell-backend consistently within the project (i.e. in nix, `cabal.project`, and `stack.yaml`), call
```
nix run .#update-haskell-backend
```
You can optionally pass a commit hash to the command above if you don't want master.
### Eventlog tracing
Besides compiling the backend in profiling mode, we can also enable targeted profiling by emitting useful debug events into the eventlog. So far we can emit/trace the following:
* LLVM backend API calls - using `--trace llvm-calls +RTS -l-au` we can collect all the LLVM backend API calls the server performs during execution. Running the obtained eventlog through `eventlog-parser` will produce an `llvm_calls.c` file containing these calls.
* Timing information in IO - using `--trace timing +RTS -lsu` we can instrument code with `Trace.timeIO "foo" ...` calls, which will measure the time spent in `...` and attach the label `foo` to this region in the speedscope profile. `eventlog-parser` will produce a JSON file of these calls, viewable in the speedscope app.
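As a hedged sketch, the tracing flags from the first bullet combine with an ordinary server invocation as follows; the binary, definition path, and module name are placeholders, and the binary must have been built with eventlog support:

```sh
kore-rpc-booster definition.kore --module FOO --trace llvm-calls +RTS -l-au
```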
0 commit comments