This commit is contained in:
Kai Renken 2023-10-01 23:28:16 +02:00
commit c6da80c598
230 changed files with 16696 additions and 4996 deletions


@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.2.3
[bumpversion:file:veilid-server/Cargo.toml]
search = name = "veilid-server"


@@ -1,8 +1,4 @@
.vscode
.git
target
veilid-core/pkg

BOOTSTRAP-SETUP.md Executable file

@@ -0,0 +1,89 @@
# Starting a Generic/Public Veilid Bootstrap Server
## Instance Recommended Setup
- CPU: Single
- RAM: 1GB
- Storage: 25GB
- IP: Static v4 & v6
- Firewall: 5150/TCP/UDP inbound allow all
## Install Veilid
Follow instructions in [INSTALL.md](./INSTALL.md)
## Configure Veilid as Bootstrap
### Stop the Veilid service
```shell
sudo systemctl stop veilid-server.service
```
### Setup the config
In `/etc/veilid-server/veilid-server.conf`, ensure `bootstrap: ['bootstrap.<your.domain>']` is set in the `routing_table:` section.
If you came here from the [dev network setup](./dev-setup/dev-network-setup.md) guide, this is when you set the network key.
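To sanity-check the edit before moving on, you can grep the config for the routing table section (a sketch; the path is the one used above, and the `|| echo` just keeps the command from erroring out if the file is missing):

```shell
# Show the routing_table section so you can confirm the bootstrap entry
grep -n -A3 'routing_table:' /etc/veilid-server/veilid-server.conf \
  || echo "routing_table section not found - check the config path"
```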
**Switch to veilid user**
```shell
sudo -u veilid /bin/bash
```
### Generate a new keypair
Copy the output to secure storage such as a password manager. This information will be used in the next step and can be used for node recovery, moving to a different server, etc.
```shell
veilid-server --generate-key-pair
```
### Create new node ID and flush existing route table
Include the brackets [] when pasting the keys. Use the public key in the command. The secret key will be requested interactively and will not echo when pasted.
```shell
veilid-server --set-node-id [PUBLIC_KEY] --delete-table-store
```
### Generate the DNS TXT record
Copy the output to secure storage. This information will be used to set up DNS records.
```shell
veilid-server --dump-txt-record
```
### Start the Veilid service
Disconnect from the veilid user and start veilid-server.service.
```shell
exit
```
```shell
sudo systemctl start veilid-server.service
```
Optionally, configure the service to start at boot: `sudo systemctl enable veilid-server.service`
_REPEAT FOR EACH BOOTSTRAP SERVER_
## Enter DNS Records
Create the following DNS Records for your domain:
(This example assumes two bootstrap servers are being created.)
| Record | Value | Record Type |
|-----------|-----------------------------|-------------|
|bootstrap | 1,2 | TXT |
|1.bootstrap| IPv4 | A |
|1.bootstrap| IPv6 | AAAA |
|1.bootstrap| output of --dump-txt-record | TXT |
|2.bootstrap| IPv4 | A |
|2.bootstrap| IPv6 | AAAA |
|2.bootstrap| output of --dump-txt-record | TXT |
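Once the records are published, you can verify them from any machine with `dig` (a sketch; `example.com` is a hypothetical stand-in for your domain, and `dig` comes from the dnsutils/bind-utils package):

```shell
DOMAIN=example.com   # hypothetical placeholder; substitute your own domain
dig +short TXT "bootstrap.${DOMAIN}"     # should match the bootstrap TXT value, e.g. 1,2
dig +short A "1.bootstrap.${DOMAIN}"     # the first server's IPv4
dig +short AAAA "1.bootstrap.${DOMAIN}"  # the first server's IPv6
dig +short TXT "1.bootstrap.${DOMAIN}"   # the --dump-txt-record output
```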


@@ -1,3 +1,19 @@
**Changed in Veilid 0.2.3**
- Security fix for WS denial of service
- Support for latest Rust 1.72
**Changed in Veilid 0.2.2**
- Capnproto 1.0.1 + Protobuf 24.3
- DHT set/get correctness fixes
- Connection table fixes
- Node resolution fixes
- More debugging commands (appmessage, appcall, resolve, better nodeinfo, etc)
- Reverse connect for WASM nodes
- Better Typescript types for WASM
- Various script and environment cleanups
- Earthly build for aarch64 RPM
- Much improved and faster public address detection
**Changes in Veilid 0.2.1**
- Crates are separated and publishable
- First publication of veilid-core with docs to crates.io and docs.rs


@@ -1,73 +1,71 @@
# Contributing to Veilid
Before you get started, please review our [Code of Conduct](./code_of_conduct.md). We're here to make things better and we cannot do that without treating each other with respect.
## Code Contributions
To begin crafting code to contribute to the Veilid project, first set up a [development environment](./DEVELOPMENT.md). [Fork] and clone the project into your workspace; check out a new local branch and name it in a way that describes the work being done. This is referred to as a [feature branch].
Some contributions might introduce changes that are incompatible with other existing nodes. In this case it is recommended to also set up a [development network](./dev-setup/dev-network-setup.md).
Once you have added your new function or addressed a bug, test it locally to ensure it's working as expected. If needed, test your work in a development network with more than one node based on your code. Once you're satisfied your code works as intended and does not introduce negative results or new bugs, follow the merge requests section below to submit your work for maintainer review.
We try to consider all merge requests fairly and with attention deserving to those willing to put in time and effort, but if you do not follow these rules, your contribution will be closed. We strive to ensure that the code joining the main branch is written to a high standard.
### Code Contribution Do's & Don'ts
Keeping the following in mind gives your contribution the best chance of landing!
#### Merge Requests
- **Do** start by [forking] the project.
- **Do** create a [feature branch] to work on instead of working directly on `main`. This helps to:
  - Protect the process.
  - Ensure users are aware of commits on the branch being considered for merge.
  - Provide a place for more commits to be offered without mingling with other contributor changes.
  - Allow contributors to make progress while an MR is still being reviewed.
- **Do** follow the [50/72 rule] for Git commit messages.
- **Do** target your merge request to the **main branch**.
- **Do** specify a descriptive title to make searching for your merge request easier.
- **Do** list [verification steps] so your code is testable.
- **Do** reference associated issues in your merge request description.
- **Don't** leave your merge request description blank.
- **Don't** abandon your merge request. Being responsive helps us land your code faster.
- **Don't** submit unfinished code.
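As an illustration of the [50/72 rule], a message keeps its subject line at 50 characters or fewer, followed by a blank line and a body wrapped at 72 columns. A hypothetical example (the bug described here is invented purely for illustration):

```shell
# Subject (<= 50 chars), blank line, body wrapped at 72 columns
printf '%s\n\n%s\n' \
  "Fix connection table pruning on restart" \
  "Stale entries for peers that reconnected with a new port caused
resolution failures. Prune entries keyed by node ID before inserting
the new connection."
```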
## Contributions Without Writing Code
There are numerous ways you can contribute to the growth and success of the Veilid project without writing code:
- If you want to submit merge requests, begin by [forking] the project and checking out a new local branch. Name your new branch in a way that describes the work being done. This is referred to as a [feature branch].
- Submit bugs as well as feature/enhancement requests. Letting us know you found a bug, have an idea for a new feature, or see a way we can enhance existing features is just as important and useful as writing the code related to those things. Send us detailed information about your issue or idea:
  - Features/Enhancements: Describe your idea. If you're able to, sketch out a diagram or mock-up.
  - Bugs: Please be sure to include the expected behavior, the observed behavior, and steps to reproduce the problem. Please be descriptive about the environment you've installed your node or application into.
- [Help other users with open issues]. Sometimes all an issue needs is a little conversation to clear up a process or misunderstanding. Please keep the [Code of Conduct](./code_of_conduct.md) in mind.
- Help other contributors test recently submitted merge requests. By pulling down a merge request and testing it, you can help validate new code contributions for stability and quality.
- Report a security or privacy vulnerability. Please let us know if you find ways in which Veilid could handle security and/or privacy in a different or better way, and definitely let us know if you find broken or otherwise flawed security and/or privacy functions. You can report these directly to <security@veilid.org>.
- Add or edit documentation. Documentation is a living and evolving library of knowledge. As such, care, feeding, and even pruning is needed from time to time. If you're a non-native English speaker, you can help by replacing any ambiguous idioms, metaphors, or unclear language that might make our documentation hard to understand.
### Bug Fixes
- **Do** include reproduction steps in the form of [verification steps].
- **Do** link to any corresponding issues in your commit description.
## Bug Reports
When reporting Veilid issues:
- **Do** write a detailed description of your bug and use a descriptive title.
- **Do** include reproduction steps, stack traces, and anything that might help us fix your bug.
- **Don't** file duplicate reports. Search open issues for similar bugs before filing a new report.
- **Don't** attempt to report issues on a closed PR. New issues should be opened against the `main` branch.
Please report vulnerabilities in Veilid directly to <security@veilid.org>.
If you're looking for more guidance, talk to other Veilid contributors on the [Veilid Discord].
**Thank you** for taking the few moments to read this far! Together we will build something truly remarkable.
This contributor guide is inspired by the contribution guidelines of the [Metasploit Framework](https://github.com/rapid7/metasploit-framework/blob/master/CONTRIBUTING.md) project found on GitHub.
[Help other users with open issues]: https://gitlab.com/veilid/veilid/-/issues

Cargo.lock generated

File diff suppressed because it is too large


@@ -7,12 +7,18 @@ members = [
"veilid-flutter/rust",
"veilid-wasm",
]
resolver = "2"
[patch.crates-io]
cursive = { git = "https://gitlab.com/veilid/cursive.git" }
cursive_core = { git = "https://gitlab.com/veilid/cursive.git" }
# For local development
# keyvaluedb = { path = "../keyvaluedb/keyvaluedb" }
# keyvaluedb-memorydb = { path = "../keyvaluedb/keyvaluedb-memorydb" }
# keyvaluedb-sqlite = { path = "../keyvaluedb/keyvaluedb-sqlite" }
# keyvaluedb-web = { path = "../keyvaluedb/keyvaluedb-web" }
[profile.release]
opt-level = "s"
lto = true


@@ -1,8 +1,9 @@
# Veilid Development
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg)](code_of_conduct.md)
## Introduction
This guide covers setting up environments for core, Flutter/Dart, and Python development. See the relevant sections.
## Obtaining the source code
@@ -20,14 +21,15 @@ itself, Ubuntu or Mint. Pull requests to support other distributions would be
welcome!
Running the setup script requires:
* Android SDK and NDK
* Rust
You may decide to use Android Studio [here](https://developer.android.com/studio)
to maintain your Android dependencies. If so, use the dependency manager
within your IDE. If you plan on using Flutter for Veilid development, the Android Studio
method is highly recommended as you may run into path problems with the 'flutter'
command line without it. If you do so, you may skip to
[Run Veilid setup script](#Run Veilid setup script).
* build-tools;33.0.1
@@ -38,7 +40,6 @@ command line without it. If you do so, you may skip to
#### Setup Dependencies using the CLI
You can automatically install the prerequisites using this script:
```shell
@@ -88,20 +89,21 @@ cd veilid-flutter
./setup_flutter.sh
```
### macOS
Development of Veilid on MacOS is possible on both Intel and ARM hardware.
Development requires:
* Android Studio
* Xcode, preferably latest version
* Homebrew [here](https://brew.sh)
* Android SDK and NDK
* Rust
You will need to use Android Studio [here](https://developer.android.com/studio)
to maintain your Android dependencies. Use the SDK Manager in the IDE to install the following packages (use package details view to select version):
* Android SDK Build Tools (33.0.1)
* NDK (Side-by-side) (25.1.8937393)
* Cmake (3.22.1)
@@ -121,7 +123,7 @@ export PATH=\$PATH:$HOME/Library/Android/sdk/platform-tools
EOF
```
#### Run Veilid setup script (macOS)
Now you may run the MacOS setup script to check your development environment and
pull the remaining Rust dependencies:
@@ -130,7 +132,7 @@ pull the remaining Rust dependencies:
./dev-setup/setup_macos.sh
```
#### Run the veilid-flutter setup script (optional) (macOS)
If you are developing Flutter applications or the flutter-veilid portion, you should
install Android Studio, and run the flutter setup script:
@@ -144,13 +146,13 @@ cd veilid-flutter
For a simple installation allowing Rust development, follow these steps:
Install Git from <https://git-scm.com/download/win>
Install Rust from <https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe> (this may prompt you to run the Visual Studio Installer, and reboot, before proceeding).
Ensure that protoc.exe is in a directory in your path. For example, it can be obtained from <https://github.com/protocolbuffers/protobuf/releases/download/v24.3/protoc-24.3-win64.zip>
Ensure that capnp.exe (for Cap'n Proto 1.0.1) is in a directory in your path. For example, it can be obtained from the <https://capnproto.org/capnproto-c++-win32-1.0.1.zip> distribution. Please note that the Windows Package Manager Community Repository (i.e., winget) as of 2023-09-15 has version 0.10.4, which is not sufficient.
Start a Command Prompt window.


@@ -2,7 +2,7 @@ VERSION 0.6
# Start with older Ubuntu to ensure GLIBC symbol versioning support for older linux
# Ensure we are using an amd64 platform because some of these targets use cross-platform tooling
FROM ubuntu:18.04
# Install build prerequisites
deps-base:
@@ -45,9 +45,6 @@ deps-rust:
# Install Linux cross-platform tooling
deps-cross:
FROM +deps-rust
RUN curl https://ziglang.org/builds/zig-linux-x86_64-0.11.0-dev.3978+711b4e93e.tar.xz | tar -C /usr/local -xJf -
RUN mv /usr/local/zig-linux-x86_64-0.11.0-dev.3978+711b4e93e /usr/local/zig
ENV PATH=$PATH:/usr/local/zig
@@ -216,6 +213,27 @@ package-linux-arm64-deb:
# save artifacts
SAVE ARTIFACT --keep-ts /dpkg/out/*.deb AS LOCAL ./target/packages/
package-linux-arm64-rpm:
FROM --platform arm64 rockylinux:8
RUN yum install -y createrepo rpm-build rpm-sign yum-utils rpmdevtools
RUN rpmdev-setuptree
#################################
### RPMBUILD .RPM FILES
#################################
RUN mkdir -p /veilid/target
COPY --dir .cargo files scripts veilid-cli veilid-core veilid-server veilid-tools veilid-flutter veilid-wasm Cargo.lock Cargo.toml package /veilid
COPY +build-linux-arm64/aarch64-unknown-linux-gnu /veilid/target/aarch64-unknown-linux-gnu
RUN mkdir -p /rpm-work-dir/veilid-server
# veilid-server
RUN veilid/package/rpm/veilid-server/earthly_make_veilid_server_rpm.sh aarch64 aarch64-unknown-linux-gnu
#SAVE ARTIFACT --keep-ts /root/rpmbuild/RPMS/aarch64/*.rpm AS LOCAL ./target/packages/
# veilid-cli
RUN veilid/package/rpm/veilid-cli/earthly_make_veilid_cli_rpm.sh aarch64 aarch64-unknown-linux-gnu
# save artifacts
SAVE ARTIFACT --keep-ts /root/rpmbuild/RPMS/aarch64/*.rpm AS LOCAL ./target/packages/
package-linux-amd64:
BUILD +package-linux-amd64-deb
BUILD +package-linux-amd64-rpm


@@ -1,61 +1,99 @@
# Install and run a Veilid Node
## Server Grade Headless Nodes
These network support nodes are heavier than the node a user would establish on their phone in the form of a chat or social media application. A cloud based virtual private server (VPS), such as Digital Ocean Droplets or AWS EC2, with high bandwidth, processing resources, and uptime availability is crucial for building the fast, secure, and private routing that Veilid is built to provide.
## Install
### Debian
Follow the steps here to add the repo to a Debian based system and install Veilid.
**Step 1**: Add the GPG keys to your operating system's keyring.<br />
*Explanation*: The `wget` command downloads the public key, and the `sudo gpg` command adds the public key to the keyring.
```shell
wget -O- https://packages.veilid.net/gpg/veilid-packages-key.public | sudo gpg --dearmor -o /usr/share/keyrings/veilid-packages-keyring.gpg
```
**Step 2**: Identify your architecture.<br />
*Explanation*: The following command will tell you what type of CPU your system is running.
```shell
dpkg --print-architecture
```
**Step 3**: Add Veilid to your list of available software.<br />
*Explanation*: Use the result of your command in **Step 2** and run **one** of the following:
- For **AMD64** based systems run this command:

  ```shell
  echo "deb [arch=amd64 signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
  ```

- For **ARM64** based systems run this command:

  ```shell
  echo "deb [arch=arm64 signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
  ```

*Explanation*:
Each of the above commands will create a new file called `veilid.list` in the `/etc/apt/sources.list.d/` directory. This file contains instructions that tell the operating system where to download Veilid.
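Equivalently, Steps 2 and 3 can be combined so the detected architecture is substituted automatically (a sketch of the same commands, with the repo line built from `dpkg --print-architecture`):

```shell
# Build the repo line from the detected architecture (amd64 or arm64)
ARCH=$(dpkg --print-architecture)
echo "deb [arch=${ARCH} signed-by=/usr/share/keyrings/veilid-packages-keyring.gpg] https://packages.veilid.net/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/veilid.list 1>/dev/null
```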
**Step 4**: Refresh the package manager.<br />
*Explanation*: This tells the `apt` package manager to rebuild the list of available software using the files in the `/etc/apt/sources.list.d/` directory.
```shell
sudo apt update
```
**Step 5**: Install Veilid.
```shell
sudo apt install veilid-server veilid-cli
```
### RPM-based
Follow the steps here to add the repo to RPM-based systems (CentOS, Rocky Linux, AlmaLinux, Fedora, etc.) and install Veilid.
**Step 1**: Add Veilid to your list of available software.
```shell
sudo yum-config-manager --add-repo https://packages.veilid.net/rpm/veilid-rpm-repo.repo
```
**Step 2**: Install Veilid.
```shell
sudo dnf install veilid-server veilid-cli
```
## Start headless node
### With systemd
To start a headless Veilid node, run:
```shell
sudo systemctl start veilid-server.service
```
To have your headless Veilid node start at boot:
```shell
sudo systemctl enable --now veilid-server.service
```
### Without systemd
`veilid-server` must be run as the `veilid` user.
To start your headless Veilid node without systemd, run:
```shell
sudo -u veilid veilid-server
```
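Whichever way you start the node, a quick check that it actually came up is to look for the process (a sketch; `pgrep` is assumed to be available, and the `|| echo` keeps the command from failing when nothing matches):

```shell
# Look for a running veilid-server process
pgrep -a veilid-server || echo "veilid-server is not running"
```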


@@ -1,8 +1,8 @@
# Welcome to Veilid
- [From Orbit](#from-orbit)
- [Run a Node](#run-a-node)
- [Development](#development)
## From Orbit
@@ -13,17 +13,19 @@ Veilid was designed with the idea in mind that every user can have their own
The primary purpose of the Veilid network is to provide the infrastructure for a specific kind of shared data: social media in various forms. This includes light-weight content such as Twitter's/X's tweets or Mastodon's toots, medium-weight content like images or music, and heavy-weight content like videos. Meta-content (such as personal feeds, replies, private messages, and so on) is likewise intended to run on top of Veilid.
## Run a Node
The easiest way to help the Veilid network grow is to run your own node. Every Veilid user is automatically a node, but some nodes help the network more than others. These network-supporting nodes are heavier than the nodes users would start on a smartphone in the form of a chat or social media application. Droplets or AWS EC2 instances with high bandwidth, processing resources, and availability are essential to build the fast, secure, and private routing that Veilid is meant to provide.
To run such a node, set up a Debian- or Fedora-based server and install the veilid-server service. To make this especially easy, we provide package manager repositories for .deb and .rpm packages. For further information, see the [installation](./INSTALL.md) guide.
## Development
If you feel like getting involved in code development, or contributing in other ways, please have a look at the [contributing](./CONTRIBUTING.md) guide. We strive to develop this project in the open, by people for people. Specific areas in which we are looking for help are:
- Rust
- Flutter/Dart
- Python
- Gitlab DevOps and CI/CD
- Documentation
- Security reviews
- Linux packages

View File

@ -1,8 +1,8 @@
# Welcome to Veilid
- [From Orbit](#from-orbit)
- [Run a Node](#run-a-node)
- [Development](#development)
## From Orbit
@ -13,17 +13,19 @@ Veilid is designed with a social dimension in mind, so that each user can have t
The primary purpose of the Veilid network is to provide the infrastructure for a specific kind of shared data: social media in various forms. That includes light-weight content such as Twitter's tweets or Mastodon's toots, medium-weight content like images and songs, and heavy-weight content like videos. Meta-content such as personal feeds, replies, private messages, and so forth are also intended to run atop Veilid.
## Run a Node
The easiest way to help grow the Veilid network is to run your own node. Every user of Veilid is a node, but some nodes help the network more than others. These network support nodes are heavier than the node a user would establish on their phone in the form of a chat or social media application. A cloud-based virtual private server (VPS), such as Digital Ocean Droplets or AWS EC2, with high bandwidth, processing resources, and uptime availability is crucial for building the fast, secure, and private routing that Veilid is built to provide.
To run such a node, establish a Debian or Fedora based VPS and install the veilid-server service. To make this process simple we are hosting package manager repositories for .deb and .rpm packages. See the [installing](./INSTALL.md) guide for more information.
## Development
If you're inclined to get involved in code and non-code development, please check out the [contributing](./CONTRIBUTING.md) guide. We're striving for this project to be developed in the open and by people for people. Specific areas in which we are looking for help include:
- Rust
- Flutter/Dart
- Python
- Gitlab DevOps and CI/CD
- Documentation
- Security reviews
- Linux packaging

View File

@ -9,19 +9,19 @@ This guide outlines the process for releasing a new version of Veilid. The end r
Releases happen via a CI/CD pipeline. The release process flows as follows:
1. Complete outstanding merge requests (MR):
1.1 Evaluate the MR's adherence to the published requirements and whether automatic tests passed.
1.2 (Optional) Perform the merge in a local dev environment if testing is required beyond the standard Earthly tests.
1.3 If everything checks out, the MR meets the published requirements, and tests passed, execute the merge functions in the Gitlab UI.
2. Maintainer performs version bump:
2.1 Update your local copy of `main` to mirror the newly merged upstream `main`
2.2 Ensure the [CHANGELOG](./CHANGELOG.md) is updated
2.3 Activate your bumpversion Python venv (see bumpversion setup section for details)
2.4 Execute version_bump.sh with the appropriate parameter (patch, minor, or major). This results in all version entries being updated and a matching git tag created locally.
@ -31,15 +31,23 @@ Releases happen via a CI/CD pipeline. The release process flows as follows:
2.6 Git commit the changes with the following message: `Version update: v{current_version} → v{new_version}`
2.7 Create the Git tag `git tag v{new_version}`
2.8 Push your local 'main' to the upstream origin 'main' `git push`
2.9 Push the new tag to the upstream origin `git push origin {tag name made in step 2.7}` i.e. `git push origin v0.1.5`
2.10 Ensure the package/release/distribute pipeline autostarted in the Gitlab UI
Git tags serve as a historical record of what repo versions were successfully released at which version numbers.
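The commit-message and tag conventions from steps 2.6–2.9 can be sketched as a short shell session (the version numbers below are examples only; in practice `version_bump.sh` determines them):

```shell
# Example values only - version_bump.sh produces the real ones.
current_version="0.2.1"
new_version="0.2.3"

# Step 2.6: the required commit message format
commit_msg="Version update: v${current_version} → v${new_version}"
echo "$commit_msg"

# Steps 2.7-2.9, shown as comments so this sketch is safe to run anywhere:
# git commit -am "$commit_msg"
# git tag "v${new_version}"
# git push && git push origin "v${new_version}"
```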
## Publish to crates.io
1. Configure the crates.io credentials, if not already accomplished.
2. Execute `cargo publish -p veilid-tools --dry-run`
3. Execute `cargo publish -p veilid-tools`
4. Execute `cargo publish -p veilid-core --dry-run`
5. Execute `cargo publish -p veilid-core`
## Publish to PyPI
1. Change directory to veilid-python
@ -53,14 +61,15 @@ Occasionally a release will happen that needs to be reverted. This is done manua
## Released Artifacts
### Rust Crates
- [x] __veilid-tools__ [__Tag__: `veilid-tools-v0.0.0`]
> An assortment of useful components used by the other Veilid crates.
> Released to crates.io when its version number is changed in `Cargo.toml`
- [x] __veilid-core__ [__Tag__: `veilid-core-v0.0.0`]
> The base rust crate for Veilid's logic
> Released to crates.io when its version number is changed in `Cargo.toml`
- [ ] __veilid-server__
> The Veilid headless node end-user application
> Not released to crates.io as it is an application binary that is either built by hand or installed using a package manager.
> This application does not currently support `cargo install`
@ -69,51 +78,63 @@ Occasionally a release will happen that needs to be reverted. This is done manua
> This application does not currently support `cargo install`
- [ ] __veilid-wasm__
> Not released to crates.io as it is not a library that can be linked by other Rust applications
- [ ] __veilid-flutter__
> The Dart-FFI native interface to the Veilid API
> This is currently built by the Flutter plugin `veilid-flutter` and not released.
### Python Packages
- [x] __veilid-python__ [__Tag__: `veilid-python-v0.0.0`]
> The Veilid API bindings for Python
> Released to PyPI when the version number is changed in `pyproject.toml`
### Flutter Plugins
- [ ] __veilid-flutter__
> The Flutter plugin for the Veilid API.
> Because this requires a build of a native Rust crate, this is not yet released via <https://pub.dev>
> TODO: Eventually the rust crate should be bound to
### Operating System Packages
- [x] __veilid-server__ DEB package [__Tag__: `veilid-server-deb-v0.0.0`]
> The Veilid headless node binary in the following formats:
>
> - Standalone Debian/Ubuntu DEB file as a 'release file' on the `veilid` GitLab repository
> - Pushed to APT repository at <https://packages.veilid.net>
>
- [x] __veilid-server__ RPM package [__Tag__: `veilid-server-rpm-v0.0.0`]
> The Veilid headless node binary in the following formats:
>
> - Standalone RedHat/CentOS RPM file as a 'release file' on the `veilid` GitLab repository
> - Pushed to Yum repository at <https://packages.veilid.net>
>
- [x] __veilid-cli__ DEB package [__Tag__: `veilid-cli-deb-v0.0.0`]
> The Veilid headless node administrator control binary in the following formats:
>
> - Standalone Debian/Ubuntu DEB file as a 'release file' on the `veilid` GitLab repository
> - Pushed to APT repository at <https://packages.veilid.net>
>
- [x] __veilid-cli__ RPM package [__Tag__: `veilid-cli-rpm-v0.0.0`]
> The Veilid headless node administrator control binary in the following formats:
>
> - Standalone RedHat/CentOS RPM file as a 'release file' on the `veilid` GitLab repository
> - Pushed to Yum repository at <https://packages.veilid.net>
### Version Numbering
All versions of Veilid Rust crates as well as `veilid-python` and `veilid-flutter` packages are versioned using Semver. Versions can differ per crate and package, and it is important for the Semver rules to be followed (<https://semver.org/>):
- MAJOR version when you make incompatible API changes
- MINOR version when you add functionality in a backward compatible manner
- PATCH version when you make backward compatible bug fixes
The `version_bump.sh` script should be run on every release to stable. All of the Rust crates are versioned together and should have the same version, as well as the `veilid-python` Python package and `veilid-flutter` Flutter plugin.
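As an illustration of those three Semver rules, here is a tiny shell helper (illustrative only; the repo's `version_bump.sh` is the real tool and also rewrites the crates and creates the git tag):

```shell
# Illustrative Semver bumping only - not the repo's version_bump.sh.
bump() { # usage: bump <major|minor|patch> <version>
  local part=$1 major minor patch
  IFS=. read -r major minor patch <<< "$2"
  case $part in
    major) echo "$((major + 1)).0.0" ;;               # incompatible API changes
    minor) echo "${major}.$((minor + 1)).0" ;;        # backward compatible features
    patch) echo "${major}.${minor}.$((patch + 1))" ;; # backward compatible bug fixes
  esac
}

bump patch 0.2.2  # → 0.2.3
bump minor 0.2.3  # → 0.3.0
bump major 0.3.0  # → 1.0.0
```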
## Bumpversion Setup and Usage
### Install Bumpversion
1. Create a Python venv for bumpversion.py. Mine is in my home dir so it persists when I update my local Veilid `main`.
`python3 -m venv ~/bumpversion-venv`
@ -121,5 +142,6 @@ The `version_bump.sh` script should be run on every release to stable. All of th
3. Install bumpversion. `pip3 install bumpversion`
### Activate venv for version bumping step of the release process
1. Activate the venv. `source ~/bumpversion-venv/bin/activate`
2. Return to step 2.4 of _Create a Gitlab Release_

build_docs.bat Normal file
View File

@ -0,0 +1,3 @@
@echo off
cargo doc --no-deps -p veilid-core
cargo doc --no-deps -p veilid-tools

View File

@ -133,5 +133,6 @@ For answers to common questions about this code of conduct, see the FAQ at
[translations]: https://www.contributor-covenant.org/translations
## Revisions
Veilid Foundation, Inc reserves the right to make revisions to this document
to ensure its continued alignment with our ideals.

View File

@ -0,0 +1,42 @@
# Dev Network Setup
## Purpose
There will be times when a contributor wishes to dynamically test their work on live nodes. Doing so on the actual Veilid network would likely not yield productive test outcomes, so setting up an independent network for testing purposes is warranted.
This document outlines the process of using the steps found in [INSTALL.md](../INSTALL.md) and [BOOTSTRAP-SETUP.md](../BOOTSTRAP-SETUP.md) with some modifications which results in a reasonably isolated and independent network of Veilid development nodes which do not communicate with nodes on the actual Veilid network.
The minimum topology of a dev network is 1 bootstrap server and 4 nodes, all with public IP addresses with port 5150/TCP open. This allows enabling public address detection and private routing. The minimum specifications are 1 vCPU, 1GB RAM, and 25 GB storage.
## Quick Start
### The Network Key
This acts as a passphrase to allow nodes to join the network. It is the mechanism that makes your dev network isolated and independent. Create a passphrase and protect/store it as you would any other password.
### Dev Bootstrap Server
Follow the steps detailed in [BOOTSTRAP-SETUP.md](../BOOTSTRAP-SETUP.md) using the dev bootstrap example [config](../doc/config/veilid-dev-bootstrap-config.md) for the *Setup the config* section. Set a _network_key_password_ in the config file.
### Dev Nodes
1. Follow the steps detailed in [INSTALL.md](../INSTALL.md). *DO NOT START THE SYSTEMD SERVICE.*
2. Replace the default veilid-server config using the dev node example [config](../doc/config/veilid-dev-server-config.md) as a template. Enter your information on lines 27 and 28 to match what was entered in the dev bootstrap server's config.
3. Start the node with fresh data
```shell
sudo -u veilid veilid-server --delete-protected-store --delete-block-store --delete-table-store
```
4. `ctrl-c` to stop the above process
5. Start the dev node service
```shell
sudo systemctl start veilid-server.service
```
6. (Optional) Configure the service to start at boot
```shell
sudo systemctl enable veilid-server.service
```
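Before starting each node, a quick sanity check of the config can save debugging time; this grep-based check is an informal sketch based on the example configs above (the path comes from INSTALL.md), not an official tool:

```shell
# Rough sanity check of a dev node config (layout assumed from the example configs).
CONF="${CONF:-/etc/veilid-server/veilid-server.conf}"
if grep -q "network_key_password: '<your-chosen-passkey>'" "$CONF" 2>/dev/null; then
  echo "placeholder network key - edit the config before starting the service"
elif grep -q "network_key_password:" "$CONF" 2>/dev/null; then
  echo "network key configured"
else
  echo "no network key found - this node would try to join the public network"
fi
```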

View File

@ -9,7 +9,7 @@ fi
if [ ! -z "$(command -v apt)" ]; then
# Install APT dependencies
sudo apt update -y
sudo apt install -y openjdk-11-jdk-headless iproute2 curl build-essential cmake libssl-dev openssl file git pkg-config libdbus-1-dev libdbus-glib-1-dev libgirepository1.0-dev libcairo2-dev checkinstall unzip llvm wabt python3-pip
elif [ ! -z "$(command -v dnf)" ]; then
# DNF (formerly yum)
sudo dnf update -y

View File

@ -1,6 +1,11 @@
#!/bin/bash
set -eo pipefail
if [ $(id -u) -eq 0 ]; then
echo "Don't run this as root"
exit
fi
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
if [[ "$(uname)" != "Linux" ]]; then
@ -109,7 +114,7 @@ fi
rustup target add aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android wasm32-unknown-unknown
# install cargo packages
cargo install wasm-bindgen-cli wasm-pack cargo-edit
# install pip packages
pip3 install --upgrade bumpversion

View File

@ -27,6 +27,9 @@ else
exit 1
fi
# ensure Android SDK packages are installed
$ANDROID_SDK_ROOT/cmdline-tools/latest/bin/sdkmanager build-tools\;33.0.1 ndk\;25.1.8937393 cmake\;3.22.1 platform-tools platforms\;android-33
# ensure ANDROID_NDK_HOME is defined and exists
if [ -d "$ANDROID_NDK_HOME" ]; then
echo '[X] $ANDROID_NDK_HOME is defined and exists'
@ -129,17 +132,11 @@ if [ "$BREW_USER" == "" ]; then
fi
sudo -H -u $BREW_USER brew install capnp cmake wabt llvm protobuf openjdk@17 jq
case $response in
[yY] ) echo Checking android sdk packages are installed...;
# Ensure android sdk packages are installed
$ANDROID_SDK_ROOT/cmdline-tools/latest/bin/sdkmanager build-tools\;33.0.1 ndk\;25.1.8937393 cmake\;3.22.1 platform-tools platforms\;android-33
esac
# install targets
rustup target add aarch64-apple-darwin aarch64-apple-ios aarch64-apple-ios-sim x86_64-apple-darwin x86_64-apple-ios wasm32-unknown-unknown aarch64-linux-android armv7-linux-androideabi i686-linux-android x86_64-linux-android
# install cargo packages
cargo install wasm-bindgen-cli wasm-pack cargo-edit
# install pip packages
pip3 install --upgrade bumpversion

View File

@ -21,8 +21,8 @@ IF NOT DEFINED PROTOC_FOUND (
FOR %%X IN (capnp.exe) DO (SET CAPNP_FOUND=%%~$PATH:X)
IF NOT DEFINED CAPNP_FOUND (
echo capnproto compiler ^(capnp^) is required but it's not installed. Install capnp 1.0.1 or higher. Ensure it is in your path. Aborting.
echo capnp is available here: https://capnproto.org/capnproto-c++-win32-1.0.1.zip
goto end
)

View File

@ -0,0 +1,33 @@
# Veilid Server
# =============
#
# Public Bootstrap Server Configuration
#
# -----------------------------------------------------------
---
logging:
system:
enabled: true
level: debug
api:
enabled: true
level: debug
terminal:
enabled: false
core:
capabilities:
disable: ['TUNL','SGNL','RLAY','DIAL','DHTV','APPM','ROUT']
network:
upnp: false
dht:
min_peer_count: 2
detect_address_changes: false
routing_table:
bootstrap: ['bootstrap.<your.domain>']
protected_store:
insecure_fallback_directory: '/var/db/veilid-server/protected_store'
table_store:
directory: '/var/db/veilid-server/table_store'
block_store:
directory: '/var/db/veilid-server/block_store'

View File

@ -0,0 +1,39 @@
# Veilid Server
# =============
#
# Private Development Bootstrap Server Configuration
#
# This config is templated to setup a bootstrap server with
# a network_key_password. Set the network key to whatever you
# like. Treat it like a password. Use the same network key in
# the config files for at least four nodes to establish an
# independent Veilid network for private or development uses.
# -----------------------------------------------------------
---
logging:
system:
enabled: true
level: debug
api:
enabled: true
level: debug
terminal:
enabled: false
core:
capabilities:
disable: ['TUNL','SGNL','RLAY','DIAL','DHTV','APPM']
network:
upnp: false
dht:
min_peer_count: 2
detect_address_changes: false
routing_table:
bootstrap: ['bootstrap.<your.domain>']
network_key_password: '<your-chosen-passkey>'
protected_store:
insecure_fallback_directory: '/var/db/veilid-server/protected_store'
table_store:
directory: '/var/db/veilid-server/table_store'
block_store:
directory: '/var/db/veilid-server/block_store'

View File

@ -0,0 +1,38 @@
# Veilid Server
# =============
#
# Private Development Node Configuration
#
# This config is templated to setup a Veilid node with a
# network_key_password. Set the network key to whatever you
# set within your private bootstrap server's config. Treat it
# like a password.
# -----------------------------------------------------------
---
logging:
system:
enabled: true
level: debug
api:
enabled: true
level: debug
terminal:
enabled: false
core:
capabilities:
disable: ['APPM']
network:
upnp: false
dht:
min_peer_count: 10
detect_address_changes: false
routing_table:
bootstrap: ['bootstrap.<your.domain>']
network_key_password: '<your-chosen-passkey>'
protected_store:
insecure_fallback_directory: '/var/db/veilid-server/protected_store'
table_store:
directory: '/var/db/veilid-server/table_store'
block_store:
directory: '/var/db/veilid-server/block_store'

View File

@ -14,7 +14,6 @@
<div style="font-family: monospace; font-size: 3em; font-weight: bold; background-color: red; color: white; padding: 0.5em;">
early α docs<br/>
please don't share publicly
</div>
<h1>Veilid Architecture Guide</h1>

View File

@ -1,7 +1,3 @@
# early α docs
# please don't share publicly
# Veilid Architecture Guide
- [From Orbit](#from-orbit)

View File

@ -0,0 +1,7 @@
# When pointed at veilid-server 0.2.2 or earlier, this will cause 100% CPU utilization
import socket
s = socket.socket()
s.connect(('127.0.0.1',5150))
s.send(f"GET /ws HTTP/1.1\r\nSec-WebSocket-Version: 13\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Key: {'A'*2000000}\r\n\r\n".encode())
s.close()

View File

@ -10,4 +10,4 @@ cp -rf /veilid/package/rpm/veilid-server/veilid-server.spec /root/rpmbuild/SPECS
/veilid/package/replace_variable.sh /root/rpmbuild/SPECS/veilid-server.spec CARGO_ARCH $CARGO_ARCH
# build the rpm
rpmbuild --target "$ARCH" -bb /root/rpmbuild/SPECS/veilid-server.spec

View File

@ -1,9 +1,12 @@
#!/bin/bash
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
CAPNPROTO_VERSION="1.0.1" # Keep in sync with veilid-core/build.rs
mkdir /tmp/capnproto-install
pushd /tmp/capnproto-install
curl -O https://capnproto.org/capnproto-c++-${CAPNPROTO_VERSION}.tar.gz
tar zxf capnproto-c++-${CAPNPROTO_VERSION}.tar.gz
cd capnproto-c++-${CAPNPROTO_VERSION}
./configure --without-openssl
make -j$1 check
if [ "$EUID" -ne 0 ]; then

View File

@ -1,13 +1,24 @@
#!/bin/bash
SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
PROTOC_VERSION="24.3" # Keep in sync with veilid-core/build.rs
UNAME_M=$(uname -m)
if [[ "$UNAME_M" == "x86_64" ]]; then
PROTOC_ARCH=x86_64
elif [[ "$UNAME_M" == "aarch64" ]]; then
PROTOC_ARCH=aarch_64
else
echo Unsupported build architecture
exit 1
fi
mkdir /tmp/protoc-install
pushd /tmp/protoc-install
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v$PROTOC_VERSION/protoc-$PROTOC_VERSION-linux-$PROTOC_ARCH.zip
unzip protoc-$PROTOC_VERSION-linux-$PROTOC_ARCH.zip
if [ "$EUID" -ne 0 ]; then
if command -v checkinstall &> /dev/null; then
sudo checkinstall --pkgversion=$PROTOC_VERSION -y cp -r bin include /usr/local/
cp *.deb ~
else
sudo cp -r bin include /usr/local/
@ -16,7 +27,7 @@ if [ "$EUID" -ne 0 ]; then
sudo rm -rf /tmp/protoc-install
else
if command -v checkinstall &> /dev/null; then
checkinstall --pkgversion=$PROTOC_VERSION -y cp -r bin include /usr/local/
cp *.deb ~
else
cp -r bin include /usr/local/

View File

@ -1,7 +1,7 @@
[package]
# --- Bumpversion match - do not reorder
name = "veilid-cli"
version = "0.2.3"
# ---
authors = ["Veilid Team <contact@veilid.com>"]
edition = "2021"
@ -21,13 +21,13 @@ rt-async-std = [
rt-tokio = ["tokio", "tokio-util", "veilid-tools/rt-tokio", "cursive/rt-tokio"]
[dependencies]
async-std = { version = "^1.12", features = [
"unstable",
"attributes",
], optional = true }
tokio = { version = "^1", features = ["full"], optional = true }
tokio-util = { version = "^0", features = ["compat"], optional = true }
async-tungstenite = { version = "^0.23" }
cursive = { git = "https://gitlab.com/veilid/cursive.git", default-features = false, features = [
"crossterm",
"toml",
@ -38,10 +38,10 @@ cursive_buffered_backend = { git = "https://gitlab.com/veilid/cursive-buffered-b
# cursive-multiplex = "0.6.0"
# cursive_tree_view = "0.6.0"
cursive_table_view = "0.14.0"
arboard = "3.2.1"
# cursive-tabs = "0.5.0"
clap = { version = "4", features = ["derive"] }
directories = "^5"
log = "^0"
futures = "^0"
serde = "^1"
@ -54,7 +54,7 @@ flexi_logger = { version = "^0", features = ["use_chrono_for_offset"] }
thiserror = "^1"
crossbeam-channel = "^0"
hex = "^0"
veilid-tools = { version = "0.2.3", path = "../veilid-tools" }
json = "^0"
stop-token = { version = "^0", default-features = false }
@ -65,4 +65,4 @@ indent = { version = "0.1.1" }
chrono = "0.4.26" chrono = "0.4.26"
[dev-dependencies] [dev-dependencies]
serial_test = "^0" serial_test = "^2"

View File

@ -76,7 +76,6 @@ impl ClientApiConnection {
}; };
if let Err(e) = reply_channel.send_async(response).await { if let Err(e) = reply_channel.send_async(response).await {
error!("failed to process reply: {}", e); error!("failed to process reply: {}", e);
return;
} }
} }

View File

@ -248,7 +248,6 @@ Server Debug Commands:
_ => { _ => {
ui.add_node_event(Level::Error, format!("unknown flag: {}", flag)); ui.add_node_event(Level::Error, format!("unknown flag: {}", flag));
ui.send_callback(callback); ui.send_callback(callback);
return;
} }
} }
}); });
@ -271,7 +270,6 @@ Server Debug Commands:
_ => { _ => {
ui.add_node_event(Level::Error, format!("unknown flag: {}", flag)); ui.add_node_event(Level::Error, format!("unknown flag: {}", flag));
ui.send_callback(callback); ui.send_callback(callback);
return;
} }
} }
}); });
@ -399,12 +397,12 @@ Server Debug Commands:
} }
pub fn update_route(&self, route: &json::JsonValue) { pub fn update_route(&self, route: &json::JsonValue) {
let mut out = String::new(); let mut out = String::new();
if route["dead_routes"].len() != 0 { if !route["dead_routes"].is_empty() {
out.push_str(&format!("Dead routes: {:?}", route["dead_routes"])); out.push_str(&format!("Dead routes: {:?}", route["dead_routes"]));
} }
if route["dead_routes"].len() != 0 { if !route["dead_remote_routes"].is_empty() {
if !out.is_empty() { if !out.is_empty() {
out.push_str("\n"); out.push('\n');
} }
out.push_str(&format!( out.push_str(&format!(
"Dead remote routes: {:?}", "Dead remote routes: {:?}",
@ -460,7 +458,7 @@ Server Debug Commands:
}; };
let strmsg = if printable { let strmsg = if printable {
format!("\"{}\"", String::from_utf8_lossy(&message).to_string()) format!("\"{}\"", String::from_utf8_lossy(message))
} else { } else {
hex::encode(message) hex::encode(message)
}; };
@ -498,7 +496,7 @@ Server Debug Commands:
}; };
let strmsg = if printable { let strmsg = if printable {
format!("\"{}\"", String::from_utf8_lossy(&message).to_string()) format!("\"{}\"", String::from_utf8_lossy(message))
} else { } else {
hex::encode(message) hex::encode(message)
}; };
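Several of the edits in the hunks above are clippy-driven cleanups: `len() != 0` becomes `is_empty()`, a single-character `push_str` becomes `push`, and the redundant `.to_string()` after `String::from_utf8_lossy` inside `format!` is dropped. A minimal sketch of those equivalences (values here are illustrative, not from the codebase):

```rust
fn main() {
    // `!v.is_empty()` is the idiomatic form of `v.len() != 0`
    let dead_routes: Vec<u32> = vec![1, 2];
    assert_eq!(!dead_routes.is_empty(), dead_routes.len() != 0);

    // `push(char)` appends one character without the string-slice detour
    let mut out = String::from("Dead routes: [1, 2]");
    out.push('\n'); // same result as out.push_str("\n")
    assert!(out.ends_with('\n'));

    // `from_utf8_lossy` yields a Cow<str> that formats directly,
    // so the trailing `.to_string()` was redundant inside `format!`
    let message: &[u8] = b"hello";
    let strmsg = format!("\"{}\"", String::from_utf8_lossy(message));
    assert_eq!(strmsg, "\"hello\"");
}
```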

View File

@ -1,4 +1,5 @@
#![deny(clippy::all)] #![deny(clippy::all)]
#![allow(clippy::comparison_chain, clippy::upper_case_acronyms)]
#![deny(unused_must_use)] #![deny(unused_must_use)]
#![recursion_limit = "256"] #![recursion_limit = "256"]
@ -58,7 +59,7 @@ fn main() -> Result<(), String> {
None None
}; };
let mut settings = settings::Settings::new(settings_path.as_ref().map(|x| x.as_os_str())) let mut settings = settings::Settings::new(settings_path.as_deref())
.map_err(|e| format!("configuration is invalid: {}", e))?; .map_err(|e| format!("configuration is invalid: {}", e))?;
// Set config from command line // Set config from command line

View File

@ -58,7 +58,7 @@ impl TableViewItem<PeerTableColumn> for json::JsonValue {
PeerTableColumn::NodeId => self["node_ids"][0].to_string(), PeerTableColumn::NodeId => self["node_ids"][0].to_string(),
PeerTableColumn::Address => self["peer_address"].to_string(), PeerTableColumn::Address => self["peer_address"].to_string(),
PeerTableColumn::LatencyAvg => { PeerTableColumn::LatencyAvg => {
format!("{}", format_ts(&self["peer_stats"]["latency"]["average"])) format_ts(&self["peer_stats"]["latency"]["average"]).to_string()
} }
PeerTableColumn::TransferDownAvg => { PeerTableColumn::TransferDownAvg => {
format_bps(&self["peer_stats"]["transfer"]["down"]["average"]) format_bps(&self["peer_stats"]["transfer"]["down"]["average"])

View File

@ -6,7 +6,7 @@ use std::net::{SocketAddr, ToSocketAddrs};
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
pub fn load_default_config() -> Result<config::Config, config::ConfigError> { pub fn load_default_config() -> Result<config::Config, config::ConfigError> {
let default_config = r###"--- let default_config = r#"---
address: "localhost:5959" address: "localhost:5959"
autoconnect: true autoconnect: true
autoreconnect: true autoreconnect: true
@ -44,7 +44,7 @@ interface:
info : "white" info : "white"
warn : "light yellow" warn : "light yellow"
error : "light red" error : "light red"
"### "#
.replace( .replace(
"%LOGGING_FILE_DIRECTORY%", "%LOGGING_FILE_DIRECTORY%",
&Settings::get_default_log_directory().to_string_lossy(), &Settings::get_default_log_directory().to_string_lossy(),

View File

@ -477,7 +477,11 @@ impl UI {
let color = *Self::inner_mut(s).log_colors.get(&Level::Warn).unwrap(); let color = *Self::inner_mut(s).log_colors.get(&Level::Warn).unwrap();
cursive_flexi_logger_view::parse_lines_to_log( cursive_flexi_logger_view::parse_lines_to_log(
color.into(), color.into(),
format!(">> {} Could not copy to clipboard", UI::cli_ts(Self::get_start_time())),
); );
} }
} else { } else {
@ -491,7 +495,9 @@ impl UI {
.as_bytes(), .as_bytes(),
) )
.is_ok() .is_ok()
&& std::io::stdout().flush().is_ok()
{ {
if std::io::stdout().flush().is_ok() { if std::io::stdout().flush().is_ok() {
let color = *Self::inner_mut(s).log_colors.get(&Level::Info).unwrap(); let color = *Self::inner_mut(s).log_colors.get(&Level::Info).unwrap();
cursive_flexi_logger_view::parse_lines_to_log( cursive_flexi_logger_view::parse_lines_to_log(
@ -499,6 +505,13 @@ impl UI {
format!(">> {} Copied: {}", UI::cli_ts(Self::get_start_time()), text.as_ref()), format!(">> {} Copied: {}", UI::cli_ts(Self::get_start_time()), text.as_ref()),
); );
}
} }
} }
} }
@ -531,7 +544,7 @@ impl UI {
let mut reset: bool = false; let mut reset: bool = false;
match state { match state {
ConnectionState::Disconnected => { ConnectionState::Disconnected => {
if inner.connection_dialog_state == None if inner.connection_dialog_state.is_none()
|| inner || inner
.connection_dialog_state .connection_dialog_state
.as_ref() .as_ref()
@ -549,7 +562,7 @@ impl UI {
} }
} }
ConnectionState::Connected(_, _) => { ConnectionState::Connected(_, _) => {
if inner.connection_dialog_state != None if inner.connection_dialog_state.is_some()
&& !inner && !inner
.connection_dialog_state .connection_dialog_state
.as_ref() .as_ref()
@ -560,7 +573,7 @@ impl UI {
} }
} }
ConnectionState::Retrying(_, _) => { ConnectionState::Retrying(_, _) => {
if inner.connection_dialog_state == None if inner.connection_dialog_state.is_none()
|| inner || inner
.connection_dialog_state .connection_dialog_state
.as_ref() .as_ref()
@ -987,10 +1000,12 @@ impl UI {
} }
type CallbackSink = Box<dyn FnOnce(&mut Cursive) + 'static + Send>;
#[derive(Clone)] #[derive(Clone)]
pub struct UISender { pub struct UISender {
inner: Arc<Mutex<UIInner>>, inner: Arc<Mutex<UIInner>>,
cb_sink: Sender<Box<dyn FnOnce(&mut Cursive) + 'static + Send>>, cb_sink: Sender<CallbackSink>,
} }
impl UISender { impl UISender {
@ -1066,7 +1081,7 @@ impl UISender {
for l in 0..node_ids.len() { for l in 0..node_ids.len() {
let nid = &node_ids[l]; let nid = &node_ids[l];
if !node_id_str.is_empty() { if !node_id_str.is_empty() {
node_id_str.push_str(" "); node_id_str.push(' ');
} }
node_id_str.push_str(nid.to_string().as_ref()); node_id_str.push_str(nid.to_string().as_ref());
} }

View File

@ -1,7 +1,7 @@
[package] [package]
# --- Bumpversion match - do not reorder # --- Bumpversion match - do not reorder
name = "veilid-core" name = "veilid-core"
version = "0.2.1" version = "0.2.3"
# --- # ---
description = "Core library used to create a Veilid node and operate it as part of an application" description = "Core library used to create a Veilid node and operate it as part of an application"
authors = ["Veilid Team <contact@veilid.com>"] authors = ["Veilid Team <contact@veilid.com>"]
@ -59,14 +59,14 @@ network-result-extra = ["veilid-tools/network-result-extra"]
[dependencies] [dependencies]
# Tools # Tools
veilid-tools = { version = "0.2.0", path = "../veilid-tools", features = [ veilid-tools = { version = "0.2.3", path = "../veilid-tools", features = [
"tracing", "tracing",
], default-features = false } ], default-features = false }
paste = "1.0.14" paste = "1.0.14"
once_cell = "1.18.0" once_cell = "1.18.0"
owning_ref = "0.4.1" owning_ref = "0.4.1"
backtrace = "0.3.68" backtrace = "0.3.69"
num-traits = "0.2.15" num-traits = "0.2.16"
shell-words = "1.1.0" shell-words = "1.1.0"
static_assertions = "1.1.0" static_assertions = "1.1.0"
cfg-if = "1.0.0" cfg-if = "1.0.0"
@ -79,20 +79,19 @@ tracing = { version = "0.1.37", features = ["log", "attributes"] }
tracing-subscriber = "0.3.17" tracing-subscriber = "0.3.17"
tracing-error = "0.2.0" tracing-error = "0.2.0"
eyre = "0.6.8" eyre = "0.6.8"
thiserror = "1.0.47" thiserror = "1.0.48"
# Data structures # Data structures
enumset = { version = "1.1.2", features = ["serde"] } enumset = { version = "1.1.2", features = ["serde"] }
keyvaluedb = "0.1.0" keyvaluedb = "0.1.1"
range-set-blaze = "0.1.9" range-set-blaze = "0.1.9"
weak-table = "0.3.2" weak-table = "0.3.2"
generic-array = "0.14.7"
hashlink = { package = "veilid-hashlink", version = "0.1.0", features = [ hashlink = { package = "veilid-hashlink", version = "0.1.0", features = [
"serde_impl", "serde_impl",
] } ] }
# System # System
futures-util = { version = "0.3.28", default_features = false, features = [ futures-util = { version = "0.3.28", default-features = false, features = [
"alloc", "alloc",
] } ] }
flume = { version = "0.11.0", features = ["async"] } flume = { version = "0.11.0", features = ["async"] }
@ -101,19 +100,19 @@ lock_api = "0.4.10"
stop-token = { version = "0.7.0", default-features = false } stop-token = { version = "0.7.0", default-features = false }
# Crypto # Crypto
ed25519-dalek = { version = "2.0.0", default_features = false, features = [ ed25519-dalek = { version = "2.0.0", default-features = false, features = [
"alloc", "alloc",
"rand_core", "rand_core",
"digest", "digest",
"zeroize", "zeroize",
] } ] }
x25519-dalek = { version = "2.0.0", default_features = false, features = [ x25519-dalek = { version = "2.0.0", default-features = false, features = [
"alloc", "alloc",
"static_secrets", "static_secrets",
"zeroize", "zeroize",
"precomputed-tables", "precomputed-tables",
] } ] }
curve25519-dalek = { version = "4.0.0", default_features = false, features = [ curve25519-dalek = { version = "4.1.0", default-features = false, features = [
"alloc", "alloc",
"zeroize", "zeroize",
"precomputed-tables", "precomputed-tables",
@ -121,21 +120,21 @@ curve25519-dalek = { version = "4.0.0", default_features = false, features = [
blake3 = { version = "1.4.1" } blake3 = { version = "1.4.1" }
chacha20poly1305 = "0.10.1" chacha20poly1305 = "0.10.1"
chacha20 = "0.9.1" chacha20 = "0.9.1"
argon2 = "0.5.1" argon2 = "0.5.2"
# Network # Network
async-std-resolver = { version = "0.22.0", optional = true } async-std-resolver = { version = "0.23.0", optional = true }
trust-dns-resolver = { version = "0.22.0", optional = true } trust-dns-resolver = { version = "0.23.0", optional = true }
enum-as-inner = "=0.5.1" # temporary fix for trust-dns-resolver v0.22.0 enum-as-inner = "=0.6.0" # temporary fix for trust-dns-resolver v0.23.0
# Serialization # Serialization
capnp = { version = "0.17.2", default_features = false } capnp = { version = "0.18.1", default-features = false, features = ["alloc"] }
serde = { version = "1.0.183", features = ["derive"] } serde = { version = "1.0.188", features = ["derive"] }
serde_json = { version = "1.0.105" } serde_json = { version = "1.0.107" }
serde-big-array = "0.5.1" serde-big-array = "0.5.1"
json = "0.12.4" json = "0.12.4"
data-encoding = { version = "2.4.0" } data-encoding = { version = "2.4.0" }
schemars = "0.8.12" schemars = "0.8.13"
lz4_flex = { version = "0.11.1", default-features = false, features = [ lz4_flex = { version = "0.11.1", default-features = false, features = [
"safe-encode", "safe-encode",
"safe-decode", "safe-decode",
@ -148,9 +147,9 @@ lz4_flex = { version = "0.11.1", default-features = false, features = [
# Tools # Tools
config = { version = "0.13.3", features = ["yaml"] } config = { version = "0.13.3", features = ["yaml"] }
bugsalot = { package = "veilid-bugsalot", version = "0.1.0" } bugsalot = { package = "veilid-bugsalot", version = "0.1.0" }
chrono = "0.4.26" chrono = "0.4.31"
libc = "0.2.147" libc = "0.2.148"
nix = "0.26.2" nix = "0.27.1"
# System # System
async-std = { version = "1.12.0", features = ["unstable"], optional = true } async-std = { version = "1.12.0", features = ["unstable"], optional = true }
@ -167,27 +166,27 @@ futures-util = { version = "0.3.28", default-features = false, features = [
# Data structures # Data structures
keyring-manager = "0.5.0" keyring-manager = "0.5.0"
keyvaluedb-sqlite = "0.1.0" keyvaluedb-sqlite = "0.1.1"
# Network # Network
async-tungstenite = { version = "0.23.0", features = ["async-tls"] } async-tungstenite = { version = "0.23.0", features = ["async-tls"] }
igd = { package = "veilid-igd", version = "0.1.0" } igd = { package = "veilid-igd", version = "0.1.0" }
async-tls = "0.12.0" async-tls = "0.12.0"
webpki = "0.22.0" webpki = "0.22.1"
webpki-roots = "0.25.2" webpki-roots = "0.25.2"
rustls = "0.20.8" rustls = "=0.20.9"
rustls-pemfile = "1.0.3" rustls-pemfile = "1.0.3"
socket2 = { version = "0.5.3", features = ["all"] } socket2 = { version = "0.5.4", features = ["all"] }
# Dependencies for WASM builds only # Dependencies for WASM builds only
[target.'cfg(target_arch = "wasm32")'.dependencies] [target.'cfg(target_arch = "wasm32")'.dependencies]
veilid-tools = { version = "0.2.0", path = "../veilid-tools", default-features = false, features = [ veilid-tools = { version = "0.2.3", path = "../veilid-tools", default-features = false, features = [
"rt-wasm-bindgen", "rt-wasm-bindgen",
] } ] }
# Tools # Tools
getrandom = { version = "0.2.4", features = ["js"] } getrandom = { version = "0.2.10", features = ["js"] }
# System # System
async_executors = { version = "0.7.0", default-features = false, features = [ async_executors = { version = "0.7.0", default-features = false, features = [
@ -199,8 +198,11 @@ wasm-bindgen = "0.2.87"
js-sys = "0.3.64" js-sys = "0.3.64"
wasm-bindgen-futures = "0.4.37" wasm-bindgen-futures = "0.4.37"
send_wrapper = { version = "0.6.0", features = ["futures"] } send_wrapper = { version = "0.6.0", features = ["futures"] }
serde_bytes = { version = "0.11", default_features = false, features = [
"alloc",
] }
tsify = { version = "0.4.5", features = ["js"] } tsify = { version = "0.4.5", features = ["js"] }
serde-wasm-bindgen = "0.5.0" serde-wasm-bindgen = "0.6.0"
# Network # Network
ws_stream_wasm = "0.7.4" ws_stream_wasm = "0.7.4"
@ -210,7 +212,7 @@ wasm-logger = "0.2.0"
tracing-wasm = "0.2.1" tracing-wasm = "0.2.1"
# Data Structures # Data Structures
keyvaluedb-web = "0.1.0" keyvaluedb-web = "0.1.1"
### Configuration for WASM32 'web-sys' crate ### Configuration for WASM32 'web-sys' crate
[target.'cfg(target_arch = "wasm32")'.dependencies.web-sys] [target.'cfg(target_arch = "wasm32")'.dependencies.web-sys]
@ -242,9 +244,9 @@ ifstructs = "0.1.1"
# Dependencies for Linux or Android # Dependencies for Linux or Android
[target.'cfg(any(target_os = "android", target_os = "linux"))'.dependencies] [target.'cfg(any(target_os = "android", target_os = "linux"))'.dependencies]
rtnetlink = { version = "=0.13.0", default-features = false } rtnetlink = { version = "=0.13.1", default-features = false }
netlink-sys = { version = "=0.8.5" } netlink-sys = { version = "=0.8.5" }
netlink-packet-route = { version = "=0.17.0" } netlink-packet-route = { version = "=0.17.1" }
# Dependencies for Windows # Dependencies for Windows
[target.'cfg(target_os = "windows")'.dependencies] [target.'cfg(target_os = "windows")'.dependencies]
@ -259,12 +261,6 @@ windows-permissions = "0.2.4"
[target.'cfg(target_os = "ios")'.dependencies] [target.'cfg(target_os = "ios")'.dependencies]
tracing-oslog = { version = "0.1.2", optional = true } tracing-oslog = { version = "0.1.2", optional = true }
# Rusqlite configuration to ensure platforms that don't come with sqlite get it bundled
# Except WASM which doesn't use sqlite
[target.'cfg(all(not(target_os = "ios"),not(target_os = "android"),not(target_arch = "wasm32")))'.dependencies.rusqlite]
version = "0.29.0"
features = ["bundled"]
### DEV DEPENDENCIES ### DEV DEPENDENCIES
[dev-dependencies] [dev-dependencies]
@ -282,7 +278,7 @@ wasm-logger = "0.2.0"
### BUILD OPTIONS ### BUILD OPTIONS
[build-dependencies] [build-dependencies]
capnpc = "0.17.2" capnpc = "0.18.0"
[package.metadata.wasm-pack.profile.release] [package.metadata.wasm-pack.profile.release]
wasm-opt = ["-O", "--enable-mutable-globals"] wasm-opt = ["-O", "--enable-mutable-globals"]

View File

@ -1,6 +1,114 @@
fn main() { use std::process::{Command, Stdio};
::capnpc::CompilerCommand::new()
.file("proto/veilid.capnp") const CAPNP_VERSION: &str = "1.0.1"; // Keep in sync with scripts/install_capnp.sh
.run() const PROTOC_VERSION: &str = "24.3"; // Keep in sync with scripts/install_protoc.sh
.expect("compiling schema");
fn get_desired_capnp_version_string() -> String {
CAPNP_VERSION.to_string()
}
fn get_desired_protoc_version_string() -> String {
PROTOC_VERSION.to_string()
}
fn get_capnp_version_string() -> String {
let output = Command::new("capnp")
.arg("--version")
.stdout(Stdio::piped())
.output()
.expect("capnp was not in the PATH");
let s = String::from_utf8(output.stdout)
.expect("'capnp --version' output was not a valid string")
.trim()
.to_owned();
if !s.starts_with("Cap'n Proto version ") {
panic!("invalid capnp version string: {}", s);
}
s[20..].to_owned()
}
fn get_protoc_version_string() -> String {
let output = Command::new("protoc")
.arg("--version")
.stdout(Stdio::piped())
.output()
.expect("protoc was not in the PATH");
let s = String::from_utf8(output.stdout)
.expect("'protoc --version' output was not a valid string")
.trim()
.to_owned();
if !s.starts_with("libprotoc ") {
panic!("invalid protoc version string: {}", s);
}
s[10..].to_owned()
}
fn main() {
#[cfg(doc)]
return;
#[cfg(not(doc))]
{
let desired_capnp_version_string = get_desired_capnp_version_string();
let capnp_version_string = get_capnp_version_string();
let desired_protoc_version_string = get_desired_protoc_version_string();
let protoc_version_string = get_protoc_version_string();
// Check capnp version
let desired_capnp_major_version = desired_capnp_version_string
.split_once('.')
.unwrap()
.0
.parse::<usize>()
.expect("should be valid int");
if capnp_version_string
.split_once('.')
.unwrap()
.0
.parse::<usize>()
.expect("should be valid int")
!= desired_capnp_major_version
{
panic!(
"capnproto version should be major version {}, preferably {} but is {}",
desired_capnp_major_version, desired_capnp_version_string, capnp_version_string
);
} else if capnp_version_string != desired_capnp_version_string {
println!(
"capnproto version may be untested: {}",
capnp_version_string
);
}
// Check protoc version
let desired_protoc_major_version = desired_protoc_version_string
.split_once('.')
.unwrap()
.0
.parse::<usize>()
.expect("should be valid int");
if protoc_version_string
.split_once('.')
.unwrap()
.0
.parse::<usize>()
.expect("should be valid int")
< desired_protoc_major_version
{
panic!(
"protoc version should be at least major version {} but is {}",
desired_protoc_major_version, protoc_version_string
);
} else if protoc_version_string != desired_protoc_version_string {
println!("protoc version may be untested: {}", protoc_version_string);
}
::capnpc::CompilerCommand::new()
.file("proto/veilid.capnp")
.run()
.expect("compiling schema");
}
} }
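The new `build.rs` above gates compilation on the installed `capnp`/`protoc` versions by shelling out, stripping the tool's banner prefix, and comparing the major component. The parsing step can be sketched in isolation (version strings here are illustrative):

```rust
// Extract the major component from a dotted version string, e.g. "1.0.1" -> 1.
fn major_version(v: &str) -> usize {
    v.split_once('.')
        .map(|(major, _)| major)
        .unwrap_or(v)
        .parse::<usize>()
        .expect("version should start with an integer major component")
}

fn main() {
    assert_eq!(major_version("1.0.1"), 1);
    assert_eq!(major_version("24.3"), 24);
    // A capnp banner like "Cap'n Proto version 1.0.1" is stripped of its
    // 20-character prefix before parsing, as in the build script above.
    let banner = "Cap'n Proto version 1.0.1";
    assert_eq!(major_version(&banner[20..]), 1);
}
```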

View File

@ -30,8 +30,8 @@ elif [[ "$1" == "ios" ]]; then
elif [[ "$1" == "android" ]]; then elif [[ "$1" == "android" ]]; then
ID="$2" ID="$2"
if [[ "$ID" == "" ]]; then if [[ "$ID" == "" ]]; then
echo "No emulator ID specified" echo "No emulator ID specified, trying 'emulator-5554'"
exit 1 ID="emulator-5554"
fi fi
APPNAME=veilid_core_android_tests APPNAME=veilid_core_android_tests
APPID=com.veilid.veilid_core_android_tests APPID=com.veilid.veilid_core_android_tests

View File

@ -103,11 +103,11 @@ impl<S: Subscriber + for<'a> registry::LookupSpan<'a>> Layer<S> for ApiTracingLa
None None
}; };
(inner.update_callback)(VeilidUpdate::Log(VeilidLog { (inner.update_callback)(VeilidUpdate::Log(Box::new(VeilidLog {
log_level, log_level,
message, message,
backtrace, backtrace,
})) })))
} }
} }
} }

View File

@ -168,7 +168,7 @@ impl AttachmentManager {
}) })
.unwrap_or(true); .unwrap_or(true);
if send_update { if send_update {
Some((update_callback, Self::get_veilid_state_inner(&*inner))) Some((update_callback, Self::get_veilid_state_inner(&inner)))
} else { } else {
None None
} }
@ -197,11 +197,11 @@ impl AttachmentManager {
}; };
if let Some(update_callback) = update_callback { if let Some(update_callback) = update_callback {
update_callback(VeilidUpdate::Attachment(VeilidStateAttachment { update_callback(VeilidUpdate::Attachment(Box::new(VeilidStateAttachment {
state, state,
public_internet_ready: false, public_internet_ready: false,
local_network_ready: false, local_network_ready: false,
})) })))
} }
} }
@ -325,8 +325,8 @@ impl AttachmentManager {
// self.inner.lock().last_attachment_state // self.inner.lock().last_attachment_state
// } // }
fn get_veilid_state_inner(inner: &AttachmentManagerInner) -> VeilidStateAttachment { fn get_veilid_state_inner(inner: &AttachmentManagerInner) -> Box<VeilidStateAttachment> {
VeilidStateAttachment { Box::new(VeilidStateAttachment {
state: inner.last_attachment_state, state: inner.last_attachment_state,
public_internet_ready: inner public_internet_ready: inner
.last_routing_table_health .last_routing_table_health
@ -338,11 +338,11 @@ impl AttachmentManager {
.as_ref() .as_ref()
.map(|x| x.local_network_ready) .map(|x| x.local_network_ready)
.unwrap_or(false), .unwrap_or(false),
} })
} }
pub fn get_veilid_state(&self) -> VeilidStateAttachment { pub fn get_veilid_state(&self) -> Box<VeilidStateAttachment> {
let inner = self.inner.lock(); let inner = self.inner.lock();
Self::get_veilid_state_inner(&*inner) Self::get_veilid_state_inner(&inner)
} }
} }
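The callbacks above now receive `Box<VeilidStateAttachment>` rather than the struct by value; boxing a large enum variant payload keeps the enum itself small, which is what clippy's `large_enum_variant` lint suggests. A generic illustration of the effect, using hypothetical stand-in types rather than Veilid's own:

```rust
use std::mem::size_of;

// A stand-in for a large update payload (hypothetical, not Veilid's type).
#[allow(dead_code)]
struct BigState {
    _data: [u8; 256],
}

// Unboxed: the enum must be at least as large as its largest variant.
#[allow(dead_code)]
enum UpdateInline {
    Ping,
    State(BigState),
}

// Boxed: the payload lives behind a pointer, so the enum stays small.
#[allow(dead_code)]
enum UpdateBoxed {
    Ping,
    State(Box<BigState>),
}

fn main() {
    assert!(size_of::<UpdateInline>() >= 256);
    assert!(size_of::<UpdateBoxed>() <= 2 * size_of::<usize>());
}
```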

View File

@ -1,8 +1,7 @@
use curve25519_dalek::digest::generic_array::typenum::U64; use curve25519_dalek::digest::generic_array::{typenum::U64, GenericArray};
use curve25519_dalek::digest::{ use curve25519_dalek::digest::{
Digest, FixedOutput, FixedOutputReset, Output, OutputSizeUser, Reset, Update, Digest, FixedOutput, FixedOutputReset, Output, OutputSizeUser, Reset, Update,
}; };
use generic_array::GenericArray;
pub struct Blake3Digest512 { pub struct Blake3Digest512 {
dig: blake3::Hasher, dig: blake3::Hasher,

View File

@ -236,7 +236,7 @@ impl Envelope {
} }
// Compress body // Compress body
let body = compress_prepend_size(&body); let body = compress_prepend_size(body);
// Ensure body isn't too long // Ensure body isn't too long
let envelope_size: usize = body.len() + MIN_ENVELOPE_SIZE; let envelope_size: usize = body.len() + MIN_ENVELOPE_SIZE;

View File

@ -8,10 +8,10 @@ use crate::tests::common::test_veilid_config::*;
async fn crypto_tests_startup() -> VeilidAPI { async fn crypto_tests_startup() -> VeilidAPI {
trace!("crypto_tests: starting"); trace!("crypto_tests: starting");
let (update_callback, config_callback) = setup_veilid_core(); let (update_callback, config_callback) = setup_veilid_core();
let api = api_startup(update_callback, config_callback)
api_startup(update_callback, config_callback)
.await .await
.expect("startup failed"); .expect("startup failed")
api
} }
async fn crypto_tests_shutdown(api: VeilidAPI) { async fn crypto_tests_shutdown(api: VeilidAPI) {

View File

@ -1,5 +1,3 @@
#![allow(clippy::bool_assert_comparison)]
use super::*; use super::*;
use core::convert::TryFrom; use core::convert::TryFrom;
@ -228,7 +226,7 @@ pub async fn test_encode_decode(vcrypto: CryptoSystemVersion) {
pub async fn test_typed_convert(vcrypto: CryptoSystemVersion) { pub async fn test_typed_convert(vcrypto: CryptoSystemVersion) {
let tks1 = format!( let tks1 = format!(
"{}:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ", "{}:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ",
vcrypto.kind().to_string() vcrypto.kind()
); );
let tk1 = TypedKey::from_str(&tks1).expect("failed"); let tk1 = TypedKey::from_str(&tks1).expect("failed");
let tks1x = tk1.to_string(); let tks1x = tk1.to_string();
@ -236,22 +234,22 @@ pub async fn test_typed_convert(vcrypto: CryptoSystemVersion) {
let tks2 = format!( let tks2 = format!(
"{}:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzd", "{}:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzd",
vcrypto.kind().to_string() vcrypto.kind()
); );
let _tk2 = TypedKey::from_str(&tks2).expect_err("succeeded when it shouldnt have"); let _tk2 = TypedKey::from_str(&tks2).expect_err("succeeded when it shouldnt have");
let tks3 = format!("XXXX:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ",); let tks3 = "XXXX:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ".to_string();
let tk3 = TypedKey::from_str(&tks3).expect("failed"); let tk3 = TypedKey::from_str(&tks3).expect("failed");
let tks3x = tk3.to_string(); let tks3x = tk3.to_string();
assert_eq!(tks3, tks3x); assert_eq!(tks3, tks3x);
let tks4 = format!("XXXX:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzd",); let tks4 = "XXXX:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzd".to_string();
let _tk4 = TypedKey::from_str(&tks4).expect_err("succeeded when it shouldnt have"); let _tk4 = TypedKey::from_str(&tks4).expect_err("succeeded when it shouldnt have");
let tks5 = format!("XXX:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ",); let tks5 = "XXX:7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ".to_string();
let _tk5 = TypedKey::from_str(&tks5).expect_err("succeeded when it shouldnt have"); let _tk5 = TypedKey::from_str(&tks5).expect_err("succeeded when it shouldnt have");
let tks6 = format!("7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ",); let tks6 = "7lxDEabK_qgjbe38RtBa3IZLrud84P6NhGP-pRTZzdQ".to_string();
let tk6 = TypedKey::from_str(&tks6).expect("failed"); let tk6 = TypedKey::from_str(&tks6).expect("failed");
let tks6x = tk6.to_string(); let tks6x = tk6.to_string();
assert!(tks6x.ends_with(&tks6)); assert!(tks6x.ends_with(&tks6));
@ -338,14 +336,14 @@ async fn test_operations(vcrypto: CryptoSystemVersion) {
assert_eq!(d4.first_nonzero_nibble(), Some((0, 0x9u8))); assert_eq!(d4.first_nonzero_nibble(), Some((0, 0x9u8)));
// Verify bits // Verify bits
assert_eq!(d1.bit(0), true); assert!(d1.bit(0));
assert_eq!(d1.bit(1), false); assert!(!d1.bit(1));
assert_eq!(d1.bit(7), false); assert!(!d1.bit(7));
assert_eq!(d1.bit(8), false); assert!(!d1.bit(8));
assert_eq!(d1.bit(14), true); assert!(d1.bit(14));
assert_eq!(d1.bit(15), false); assert!(!d1.bit(15));
assert_eq!(d1.bit(254), true); assert!(d1.bit(254));
assert_eq!(d1.bit(255), false); assert!(!d1.bit(255));
assert_eq!(d1.first_nonzero_bit(), Some(0)); assert_eq!(d1.first_nonzero_bit(), Some(0));
assert_eq!(d2.first_nonzero_bit(), Some(0)); assert_eq!(d2.first_nonzero_bit(), Some(0));

View File

@ -77,7 +77,7 @@ where
macro_rules! byte_array_type { macro_rules! byte_array_type {
($name:ident, $size:expr, $encoded_size:expr) => { ($name:ident, $size:expr, $encoded_size:expr) => {
#[derive(Clone, Copy, Hash)] #[derive(Clone, Copy, Hash, PartialOrd, Ord, PartialEq, Eq)]
#[cfg_attr(target_arch = "wasm32", derive(Tsify), tsify(into_wasm_abi))] #[cfg_attr(target_arch = "wasm32", derive(Tsify), tsify(into_wasm_abi))]
pub struct $name { pub struct $name {
pub bytes: [u8; $size], pub bytes: [u8; $size],
@ -114,32 +114,6 @@ macro_rules! byte_array_type {
} }
} }
impl PartialOrd for $name {
fn partial_cmp(&self, other: &Self) -> Option<core::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl Ord for $name {
fn cmp(&self, other: &Self) -> core::cmp::Ordering {
for n in 0..$size {
let c = self.bytes[n].cmp(&other.bytes[n]);
if c != core::cmp::Ordering::Equal {
return c;
}
}
core::cmp::Ordering::Equal
}
}
impl PartialEq for $name {
fn eq(&self, other: &Self) -> bool {
self.bytes == other.bytes
}
}
impl Eq for $name {}
impl $name { impl $name {
pub fn new(bytes: [u8; $size]) -> Self { pub fn new(bytes: [u8; $size]) -> Self {
Self { bytes } Self { bytes }
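The manual `PartialOrd`/`Ord`/`PartialEq` impls deleted above compare the byte arrays element by element, which is exactly the lexicographic ordering that `#[derive(PartialOrd, Ord, PartialEq, Eq)]` produces for an array field, so the derive is behaviorally equivalent. A quick check of that equivalence on a small hypothetical stand-in type (two bytes for brevity):

```rust
use std::cmp::Ordering;

// Derived ordering on a wrapper over a byte array is lexicographic,
// matching the hand-written loop that was removed.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Key2 {
    bytes: [u8; 2],
}

// The removed manual implementation, reproduced for comparison.
fn manual_cmp(a: &Key2, b: &Key2) -> Ordering {
    for n in 0..2 {
        let c = a.bytes[n].cmp(&b.bytes[n]);
        if c != Ordering::Equal {
            return c;
        }
    }
    Ordering::Equal
}

fn main() {
    let pairs = [
        (Key2 { bytes: [0, 1] }, Key2 { bytes: [0, 2] }),
        (Key2 { bytes: [1, 0] }, Key2 { bytes: [0, 255] }),
        (Key2 { bytes: [7, 7] }, Key2 { bytes: [7, 7] }),
    ];
    for (a, b) in pairs {
        assert_eq!(a.cmp(&b), manual_cmp(&a, &b));
    }
}
```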

View File

@ -1,7 +1,6 @@
use super::*; use super::*;
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
#[cfg_attr(target_arch = "wasm32", derive(Tsify), tsify(into_wasm_abi))]
pub struct CryptoTyped<K> pub struct CryptoTyped<K>
where where
K: Clone K: Clone

View File

@ -1,9 +1,7 @@
use super::*; use super::*;
#[derive(Clone, Debug, Serialize, Deserialize, PartialOrd, Ord, PartialEq, Eq, Hash, Default)] #[derive(Clone, Debug, Serialize, Deserialize, PartialOrd, Ord, PartialEq, Eq, Hash, Default)]
#[cfg_attr(target_arch = "wasm32", derive(Tsify))]
#[serde(from = "Vec<CryptoTyped<K>>", into = "Vec<CryptoTyped<K>>")] #[serde(from = "Vec<CryptoTyped<K>>", into = "Vec<CryptoTyped<K>>")]
// TODO: figure out hot to TS type this as `string`, since it's converted to string via the JSON API.
pub struct CryptoTypedGroup<K = PublicKey> pub struct CryptoTypedGroup<K = PublicKey>
where where
K: Clone K: Clone
@ -95,16 +93,13 @@ where
} }
/// Return preferred typed key of our supported crypto kinds /// Return preferred typed key of our supported crypto kinds
pub fn best(&self) -> Option<CryptoTyped<K>> { pub fn best(&self) -> Option<CryptoTyped<K>> {
match self.items.first().copied() { self.items
None => None, .first()
Some(k) => { .copied()
if !VALID_CRYPTO_KINDS.contains(&k.kind) { .filter(|k| VALID_CRYPTO_KINDS.contains(&k.kind))
None }
} else { pub fn is_empty(&self) -> bool {
Some(k) self.items.is_empty()
}
}
}
} }
pub fn len(&self) -> usize { pub fn len(&self) -> usize {
self.items.len() self.items.len()
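The rewritten `best()` above replaces a nested `match` with `Option::filter`, which keeps `Some(k)` only when the predicate holds and is otherwise `None` — the same logic in one chained expression. The pattern in isolation, with hypothetical kind values standing in for crypto kinds:

```rust
// Both functions keep the first item only if its kind is in the valid set
// (kind values here are illustrative).
const VALID_KINDS: [u8; 2] = [0, 1];

fn best_match(items: &[(u8, &'static str)]) -> Option<(u8, &'static str)> {
    match items.first().copied() {
        None => None,
        Some(k) => {
            if !VALID_KINDS.contains(&k.0) {
                None
            } else {
                Some(k)
            }
        }
    }
}

fn best_filter(items: &[(u8, &'static str)]) -> Option<(u8, &'static str)> {
    items.first().copied().filter(|k| VALID_KINDS.contains(&k.0))
}

fn main() {
    let good = [(0u8, "ed25519"), (9, "unknown")];
    let bad = [(9u8, "unknown")];
    let empty: [(u8, &'static str); 0] = [];
    for items in [&good[..], &bad[..], &empty[..]] {
        assert_eq!(best_match(items), best_filter(items));
    }
}
```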
@ -206,7 +201,7 @@ where
if &s[0..1] != "[" || &s[(s.len() - 1)..] != "]" { if &s[0..1] != "[" || &s[(s.len() - 1)..] != "]" {
apibail_parse_error!("invalid format", s); apibail_parse_error!("invalid format", s);
} }
for x in s[1..s.len() - 1].split(",") { for x in s[1..s.len() - 1].split(',') {
let tk = CryptoTyped::<K>::from_str(x.trim())?; let tk = CryptoTyped::<K>::from_str(x.trim())?;
items.push(tk); items.push(tk);
} }
@ -274,7 +269,7 @@ where
tks tks
} }
} }
impl<K> Into<Vec<CryptoTyped<K>>> for CryptoTypedGroup<K> impl<K> From<CryptoTypedGroup<K>> for Vec<CryptoTyped<K>>
where where
K: Clone K: Clone
+ Copy + Copy
@ -288,7 +283,7 @@ where
+ Hash + Hash
+ Encodable, + Encodable,
{ {
fn into(self) -> Vec<CryptoTyped<K>> { fn from(val: CryptoTypedGroup<K>) -> Self {
self.items val.items
} }
} }
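Swapping `impl Into<Vec<...>> for CryptoTypedGroup` for `impl From<CryptoTypedGroup> for Vec<...>` follows the standard-library guidance: implementing `From` yields the `Into` direction for free via the blanket impl in `core`, while the reverse is not true. A sketch with a simplified stand-in type (hypothetical, not the real `CryptoTypedGroup`):

```rust
// Simplified stand-in for CryptoTypedGroup.
struct Group {
    items: Vec<u32>,
}

// Implement From on the target type; Into<Vec<u32>> for Group
// comes automatically from the blanket impl in core.
impl From<Group> for Vec<u32> {
    fn from(val: Group) -> Self {
        val.items
    }
}

fn main() {
    let g = Group { items: vec![1, 2, 3] };
    // Both directions work with the single From impl:
    let v: Vec<u32> = g.into();
    assert_eq!(v, vec![1, 2, 3]);
    let v2 = Vec::from(Group { items: vec![4] });
    assert_eq!(v2, vec![4]);
}
```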

View File

@ -7,9 +7,7 @@ use super::*;
tsify(from_wasm_abi, into_wasm_abi) tsify(from_wasm_abi, into_wasm_abi)
)] )]
pub struct KeyPair { pub struct KeyPair {
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
pub key: PublicKey, pub key: PublicKey,
#[cfg_attr(target_arch = "wasm32", tsify(type = "string"))]
pub secret: SecretKey, pub secret: SecretKey,
} }
from_impl_to_jsvalue!(KeyPair); from_impl_to_jsvalue!(KeyPair);
@ -98,7 +96,7 @@ impl<'de> serde::Deserialize<'de> for KeyPair {
D: serde::Deserializer<'de>, D: serde::Deserializer<'de>,
{ {
let s = <String as serde::Deserialize>::deserialize(deserializer)?; let s = <String as serde::Deserialize>::deserialize(deserializer)?;
if s == "" { if s.is_empty() {
return Ok(KeyPair::default()); return Ok(KeyPair::default());
} }
KeyPair::try_decode(s.as_str()).map_err(serde::de::Error::custom) KeyPair::try_decode(s.as_str()).map_err(serde::de::Error::custom)

View File

@ -134,8 +134,8 @@ impl CryptoSystem for CryptoSystemVLD0 {
         SharedSecret::new(s)
     }
     fn compute_dh(&self, key: &PublicKey, secret: &SecretKey) -> VeilidAPIResult<SharedSecret> {
-        let pk_xd = public_to_x25519_pk(&key)?;
-        let sk_xd = secret_to_x25519_sk(&secret)?;
+        let pk_xd = public_to_x25519_pk(key)?;
+        let sk_xd = secret_to_x25519_sk(secret)?;
         let dh_bytes = sk_xd.diffie_hellman(&pk_xd).to_bytes();
@ -188,9 +188,9 @@ impl CryptoSystem for CryptoSystemVLD0 {
     fn distance(&self, key1: &PublicKey, key2: &PublicKey) -> CryptoKeyDistance {
         let mut bytes = [0u8; CRYPTO_KEY_LENGTH];
-        for n in 0..CRYPTO_KEY_LENGTH {
+        (0..CRYPTO_KEY_LENGTH).for_each(|n| {
             bytes[n] = key1.bytes[n] ^ key2.bytes[n];
-        }
+        });
         CryptoKeyDistance::new(bytes)
     }
@ -219,7 +219,7 @@ impl CryptoSystem for CryptoSystemVLD0 {
         let sig = Signature::new(sig_bytes.to_bytes());
-        self.verify(dht_key, &data, &sig)?;
+        self.verify(dht_key, data, &sig)?;
         Ok(sig)
     }

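The `distance` change above keeps the same byte-wise XOR metric while switching the loop to a `for_each` closure. As a standalone sketch (assuming 32-byte keys and plain byte arrays; the real `CryptoKey` types carry more structure), the Kademlia-style XOR distance looks like this:

```rust
// Sketch of the XOR distance metric from `CryptoSystemVLD0::distance` above.
// CRYPTO_KEY_LENGTH and the array types here are illustrative stand-ins.
const CRYPTO_KEY_LENGTH: usize = 32;

fn xor_distance(
    key1: &[u8; CRYPTO_KEY_LENGTH],
    key2: &[u8; CRYPTO_KEY_LENGTH],
) -> [u8; CRYPTO_KEY_LENGTH] {
    let mut bytes = [0u8; CRYPTO_KEY_LENGTH];
    // XOR each byte pair; closer keys share more leading zero bits
    (0..CRYPTO_KEY_LENGTH).for_each(|n| {
        bytes[n] = key1[n] ^ key2[n];
    });
    bytes
}

fn main() {
    let a = [0xFFu8; CRYPTO_KEY_LENGTH];
    let b = [0x0Fu8; CRYPTO_KEY_LENGTH];
    // XOR distance is symmetric, and zero exactly when the keys are equal
    assert_eq!(xor_distance(&a, &b), [0xF0u8; CRYPTO_KEY_LENGTH]);
    assert_eq!(xor_distance(&a, &a), [0u8; CRYPTO_KEY_LENGTH]);
}
```

Because XOR is its own inverse, this metric is symmetric and satisfies the triangle inequality, which is what lets the routing table order peers by closeness to a target key.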
View File

@ -9,4 +9,4 @@ mod native;
 #[cfg(not(target_arch = "wasm32"))]
 pub use native::*;
-pub static KNOWN_PROTECTED_STORE_KEYS: [&'static str; 2] = ["device_encryption_key", "_test_key"];
+pub static KNOWN_PROTECTED_STORE_KEYS: [&str; 2] = ["device_encryption_key", "_test_key"];

View File

@ -324,7 +324,7 @@ impl PlatformSupportApple {
         let intf_index = unsafe { (*rt).rtm_index } as u32;

         // Fill in sockaddr table
-        for i in 0..(RTAX_MAX as usize) {
+        (0..(RTAX_MAX as usize)).for_each(|i| {
             if rtm_addrs & (1 << i) != 0 {
                 sa_tab[i] = sa;
                 sa = unsafe {
@ -333,7 +333,7 @@ impl PlatformSupportApple {
                     sa
                 };
             }
-        }
+        });

         // Look for gateways
         if rtm_addrs & (RTA_DST | RTA_GATEWAY) == (RTA_DST | RTA_GATEWAY) {
@ -373,7 +373,7 @@ impl PlatformSupportApple {
     }

     fn get_address_flags(ifname: &str, addr: sockaddr_in6) -> EyreResult<AddressFlags> {
-        let mut req = in6_ifreq::from_name(&ifname).unwrap();
+        let mut req = in6_ifreq::from_name(ifname).unwrap();
         req.set_addr(addr);
         let sock = unsafe { socket(AF_INET6, SOCK_DGRAM, 0) };

View File

@ -359,7 +359,7 @@ impl NetworkInterfaces {
         let old_best_addresses = inner.interface_address_cache.clone();

         // redo the address cache
-        Self::cache_best_addresses(&mut *inner);
+        Self::cache_best_addresses(&mut inner);

         // See if our best addresses have changed
         if old_best_addresses != inner.interface_address_cache {

View File

@ -122,7 +122,7 @@ impl PlatformSupportNetlink {
     }

     cfg_if! {
-        if #[cfg(target_os = "android")] {
+        if #[cfg(any(target_os = "android", target_env = "musl"))] {
             let res = unsafe { ioctl(sock, SIOCGIFFLAGS as i32, &mut req) };
         } else {
             let res = unsafe { ioctl(sock, SIOCGIFFLAGS, &mut req) };

View File

@ -25,7 +25,7 @@ cfg_if! {
         pub async fn resolver(
             config: config::ResolverConfig,
             options: config::ResolverOpts,
-        ) -> Result<AsyncResolver, ResolveError> {
+        ) -> AsyncResolver {
             AsyncResolver::tokio(config, options)
         }
@ -62,7 +62,6 @@ cfg_if! {
                 config::ResolverOpts::default(),
             )
             .await
-            .expect("failed to connect resolver"),
         };
         *resolver_lock = Some(resolver.clone());

View File

@ -69,14 +69,11 @@ impl ProtectedStore {
         let vkey = self.browser_key_name(key.as_ref());
-        let prev = match ls
+        let prev = ls
             .get_item(&vkey)
             .map_err(map_jsvalue_error)
             .wrap_err("exception_thrown")?
-        {
-            Some(_) => true,
-            None => false,
-        };
+            .is_some();
         ls.set_item(&vkey, value.as_ref())
             .map_err(map_jsvalue_error)

View File

@ -22,6 +22,7 @@
 //!
 #![deny(clippy::all)]
+#![allow(clippy::comparison_chain, clippy::upper_case_acronyms)]
 #![deny(unused_must_use)]
 #![recursion_limit = "256"]

View File

@ -244,12 +244,12 @@ impl AddressFilter {
             self.unlocked_inner.max_connections_per_ip6_prefix_size,
             addr,
         );
-        self.is_ip_addr_punished_inner(&*inner, ipblock)
+        self.is_ip_addr_punished_inner(&inner, ipblock)
     }

     pub fn get_dial_info_failed_ts(&self, dial_info: &DialInfo) -> Option<Timestamp> {
         let inner = self.inner.lock();
-        self.get_dial_info_failed_ts_inner(&*inner, dial_info)
+        self.get_dial_info_failed_ts_inner(&inner, dial_info)
     }

     pub fn set_dial_info_failed(&self, dial_info: DialInfo) {
@ -301,7 +301,7 @@ impl AddressFilter {
     pub fn is_node_id_punished(&self, node_id: TypedKey) -> bool {
         let inner = self.inner.lock();
-        self.is_node_id_punished_inner(&*inner, node_id)
+        self.is_node_id_punished_inner(&inner, node_id)
     }

     pub fn punish_node_id(&self, node_id: TypedKey) {
@ -333,8 +333,8 @@ impl AddressFilter {
     ) -> EyreResult<()> {
         //
         let mut inner = self.inner.lock();
-        self.purge_old_timestamps(&mut *inner, cur_ts);
-        self.purge_old_punishments(&mut *inner, cur_ts);
+        self.purge_old_timestamps(&mut inner, cur_ts);
+        self.purge_old_punishments(&mut inner, cur_ts);
         Ok(())
     }
@ -411,7 +411,7 @@ impl AddressFilter {
         );
         let ts = get_aligned_timestamp();
-        self.purge_old_timestamps(&mut *inner, ts);
+        self.purge_old_timestamps(&mut inner, ts);
         match ipblock {
             IpAddr::V4(v4) => {

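The address filter above limits connections per IPv6 prefix (`max_connections_per_ip6_prefix_size`) rather than per single address, since one host can trivially hold many addresses inside its delegated prefix. A stand-alone sketch of collapsing an address into such a prefix "block" (the helper name and v4 behavior here are illustrative, not Veilid's actual code):

```rust
use std::net::{IpAddr, Ipv6Addr};

// Illustrative sketch: map an address to its containing block by masking an
// IPv6 address down to `ip6_prefix_size` bits, so all addresses in one
// delegated prefix count against the same connection limit.
fn ip_to_block(addr: IpAddr, ip6_prefix_size: usize) -> IpAddr {
    match addr {
        // v4 addresses are limited individually in this sketch
        IpAddr::V4(v4) => IpAddr::V4(v4),
        IpAddr::V6(v6) => {
            let mut octets = v6.octets();
            for (i, o) in octets.iter_mut().enumerate() {
                let bit = i * 8;
                if bit >= ip6_prefix_size {
                    // Entirely past the prefix: zero it out
                    *o = 0;
                } else if bit + 8 > ip6_prefix_size {
                    // Straddles the boundary: keep only the prefix bits
                    *o &= 0xFFu8 << (8 - (ip6_prefix_size - bit));
                }
            }
            IpAddr::V6(Ipv6Addr::from(octets))
        }
    }
}

fn main() {
    let a: IpAddr = "2001:db8:0:1::1".parse().unwrap();
    let b: IpAddr = "2001:db8:0:1::2".parse().unwrap();
    let c: IpAddr = "2001:db8:0:2::1".parse().unwrap();
    // Same /64 collapses to the same block; a different subnet does not
    assert_eq!(ip_to_block(a, 64), ip_to_block(b, 64));
    assert_ne!(ip_to_block(a, 64), ip_to_block(c, 64));
}
```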
View File

@ -31,7 +31,7 @@ impl ConnectionHandle {
     }

     pub fn connection_descriptor(&self) -> ConnectionDescriptor {
-        self.descriptor.clone()
+        self.descriptor
     }

     #[cfg_attr(feature="verbose-tracing", instrument(level="trace", skip(self, message), fields(message.len = message.len())))]

View File

@ -117,13 +117,12 @@ impl ConnectionManager {
         // Remove the inner from the lock
         let mut inner = {
             let mut inner_lock = self.arc.inner.lock();
-            let inner = match inner_lock.take() {
+            match inner_lock.take() {
                 Some(v) => v,
                 None => {
                     panic!("not started");
                 }
-            };
-            inner
+            }
         };

         // Stop all the connections and the async processor
@ -139,6 +138,33 @@ impl ConnectionManager {
         debug!("finished connection manager shutdown");
     }

+    // Internal routine to see if we should keep this connection
+    // from being LRU removed. Used on our initiated relay connections.
+    fn should_protect_connection(&self, conn: &NetworkConnection) -> bool {
+        let netman = self.network_manager();
+        let routing_table = netman.routing_table();
+        let remote_address = conn.connection_descriptor().remote_address().address();
+        let Some(routing_domain) = routing_table.routing_domain_for_address(remote_address) else {
+            return false;
+        };
+        let Some(rn) = routing_table.relay_node(routing_domain) else {
+            return false;
+        };
+        let relay_nr = rn.filtered_clone(
+            NodeRefFilter::new()
+                .with_routing_domain(routing_domain)
+                .with_address_type(conn.connection_descriptor().address_type())
+                .with_protocol_type(conn.connection_descriptor().protocol_type()),
+        );
+        let dids = relay_nr.all_filtered_dial_info_details();
+        for did in dids {
+            if did.dial_info.address() == remote_address {
+                return true;
+            }
+        }
+        false
+    }
+
     // Internal routine to register new connection atomically.
     // Registers connection in the connection table for later access
     // and spawns a message processing loop for the connection
@ -163,8 +189,16 @@ impl ConnectionManager {
             None => bail!("not creating connection because we are stopping"),
         };
-        let conn = NetworkConnection::from_protocol(self.clone(), stop_token, prot_conn, id);
+        let mut conn = NetworkConnection::from_protocol(self.clone(), stop_token, prot_conn, id);
         let handle = conn.get_handle();

+        // See if this should be a protected connection
+        let protect = self.should_protect_connection(&conn);
+        if protect {
+            log_net!(debug "== PROTECTING connection: {} -> {}", id, conn.debug_print(get_aligned_timestamp()));
+            conn.protect();
+        }
+
         // Add to the connection table
         match self.arc.connection_table.add_connection(conn) {
             Ok(None) => {
@ -173,7 +207,7 @@ impl ConnectionManager {
             Ok(Some(conn)) => {
                 // Connection added and a different one LRU'd out
                 // Send it to be terminated
-                log_net!(debug "== LRU kill connection due to limit: {:?}", conn);
+                // log_net!(debug "== LRU kill connection due to limit: {:?}", conn);
                 let _ = inner.sender.send(ConnectionManagerEvent::Dead(conn));
             }
             Err(ConnectionTableAddError::AddressFilter(conn, e)) => {
@ -215,8 +249,8 @@ impl ConnectionManager {
         &self,
         dial_info: DialInfo,
     ) -> EyreResult<NetworkResult<ConnectionHandle>> {
-        let peer_address = dial_info.to_peer_address();
-        let remote_addr = peer_address.to_socket_addr();
+        let peer_address = dial_info.peer_address();
+        let remote_addr = peer_address.socket_addr();
         let mut preferred_local_address = self
             .network_manager()
             .net()
@ -267,26 +301,6 @@ impl ConnectionManager {
             .await;
         match result_net_res {
             Ok(net_res) => {
-                // // If the connection 'already exists', then try one last time to return a connection from the table, in case
-                // // an 'accept' happened at literally the same time as our connect. A preferred local address must have been
-                // // specified otherwise we would have picked a different ephemeral port and this could not have happened
-                // if net_res.is_already_exists() && preferred_local_address.is_some() {
-                //     // Make 'already existing' connection descriptor
-                //     let conn_desc = ConnectionDescriptor::new(
-                //         dial_info.to_peer_address(),
-                //         SocketAddress::from_socket_addr(preferred_local_address.unwrap()),
-                //     );
-                //     // Return the connection for this if we have it
-                //     if let Some(conn) = self
-                //         .arc
-                //         .connection_table
-                //         .get_connection_by_descriptor(conn_desc)
-                //     {
-                //         // Should not really happen, lets make sure we see this if it does
-                //         log_net!(warn "== Returning existing connection in race: {:?}", conn_desc);
-                //         return Ok(NetworkResult::Value(conn));
-                //     }
-                // }
                 if net_res.is_value() || retry_count == 0 {
                     // Successful new connection, return it
                     break net_res;
@ -404,4 +418,12 @@ impl ConnectionManager {
             let _ = sender.send_async(ConnectionManagerEvent::Dead(conn)).await;
         }
     }
+
+    pub async fn debug_print(&self) -> String {
+        //let inner = self.arc.inner.lock();
+        format!(
+            "Connection Table:\n\n{}",
+            self.arc.connection_table.debug_print_table()
+        )
+    }
 }

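The commit's core behavioral change is that relay connections get marked protected and the connection table's eviction loop skips them, promoting them to most-recently-used instead. A minimal stand-alone sketch of that policy, using a `VecDeque` as a stand-in for the real LRU cache (front = least recently used; all names here are illustrative):

```rust
use std::collections::VecDeque;

#[derive(Debug, PartialEq)]
struct Conn {
    id: u64,
    protected: bool,
}

// Add a connection; if over the limit, evict the least recently used
// *unprotected* connection. Protected entries are pushed back to the
// recently-used end instead, mirroring the `while let ... peek_lru` loop above.
fn add_with_limit(table: &mut VecDeque<Conn>, conn: Conn, max: usize) -> Option<Conn> {
    table.push_back(conn);
    let mut evicted = None;
    if table.len() > max {
        for _ in 0..table.len() {
            let lru = table.pop_front().unwrap();
            if lru.protected {
                // Promote instead of evicting, like `get(&lruk)` in the diff
                table.push_back(lru);
                continue;
            }
            evicted = Some(lru);
            break;
        }
    }
    evicted
}

fn main() {
    let mut table = VecDeque::new();
    assert!(add_with_limit(&mut table, Conn { id: 1, protected: true }, 2).is_none());
    assert!(add_with_limit(&mut table, Conn { id: 2, protected: false }, 2).is_none());
    // Over the limit: the protected connection 1 survives, 2 is evicted
    let evicted = add_with_limit(&mut table, Conn { id: 3, protected: false }, 2).unwrap();
    assert_eq!(evicted.id, 2);
}
```

Note that, as in the real table, if every candidate is protected nothing is evicted and the table temporarily exceeds its limit; the design trades a hard cap for never dropping the node's own relay links.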
View File

@ -72,6 +72,15 @@ impl ConnectionTable {
         }
     }

+    fn index_to_protocol(idx: usize) -> ProtocolType {
+        match idx {
+            0 => ProtocolType::TCP,
+            1 => ProtocolType::WS,
+            2 => ProtocolType::WSS,
+            _ => panic!("not a connection-oriented protocol"),
+        }
+    }
+
     #[instrument(level = "trace", skip(self))]
     pub async fn join(&self) {
         let mut unord = {
@ -123,7 +132,7 @@ impl ConnectionTable {
         false
     }

-    #[instrument(level = "trace", skip(self), ret, err)]
+    #[instrument(level = "trace", skip(self), ret)]
     pub fn add_connection(
         &self,
         network_connection: NetworkConnection,
@ -155,7 +164,7 @@ impl ConnectionTable {
         }

         // Filter by ip for connection limits
-        let ip_addr = descriptor.remote_address().to_ip_addr();
+        let ip_addr = descriptor.remote_address().ip_addr();
         match inner.address_filter.add_connection(ip_addr) {
             Ok(()) => {}
             Err(e) => {
@ -175,10 +184,20 @@ impl ConnectionTable {
         // then drop the least recently used connection
         let mut out_conn = None;
         if inner.conn_by_id[protocol_index].len() > inner.max_connections[protocol_index] {
-            if let Some((lruk, lru_conn)) = inner.conn_by_id[protocol_index].peek_lru() {
+            while let Some((lruk, lru_conn)) = inner.conn_by_id[protocol_index].peek_lru() {
                 let lruk = *lruk;
-                log_net!(debug "connection lru out: {:?}", lru_conn);
-                out_conn = Some(Self::remove_connection_records(&mut *inner, lruk));
+
+                // Don't LRU protected connections
+                if lru_conn.is_protected() {
+                    // Mark as recently used
+                    log_net!(debug "== No LRU Out for PROTECTED connection: {} -> {}", lruk, lru_conn.debug_print(get_aligned_timestamp()));
+                    inner.conn_by_id[protocol_index].get(&lruk);
+                    continue;
+                }
+
+                log_net!(debug "== LRU Connection Killed: {} -> {}", lruk, lru_conn.debug_print(get_aligned_timestamp()));
+                out_conn = Some(Self::remove_connection_records(&mut inner, lruk));
+                break;
             }
         }
@ -218,11 +237,11 @@ impl ConnectionTable {
         best_port: Option<u16>,
         remote: PeerAddress,
     ) -> Option<ConnectionHandle> {
-        let mut inner = self.inner.lock();
+        let inner = &mut *self.inner.lock();
         let all_ids_by_remote = inner.ids_by_remote.get(&remote)?;
         let protocol_index = Self::protocol_to_index(remote.protocol_type());
-        if all_ids_by_remote.len() == 0 {
+        if all_ids_by_remote.is_empty() {
             // no connections
             return None;
         }
@ -234,11 +253,11 @@ impl ConnectionTable {
         }
         // multiple connections, find the one that matches the best port, or the most recent
         if let Some(best_port) = best_port {
-            for id in all_ids_by_remote.iter().copied() {
-                let nc = inner.conn_by_id[protocol_index].peek(&id).unwrap();
+            for id in all_ids_by_remote {
+                let nc = inner.conn_by_id[protocol_index].peek(id).unwrap();
                 if let Some(local_addr) = nc.connection_descriptor().local() {
                     if local_addr.port() == best_port {
-                        let nc = inner.conn_by_id[protocol_index].get(&id).unwrap();
+                        let nc = inner.conn_by_id[protocol_index].get(id).unwrap();
                         return Some(nc.get_handle());
                     }
                 }
@ -312,7 +331,7 @@ impl ConnectionTable {
             }
         }
         // address_filter
-        let ip_addr = remote.to_socket_addr().ip();
+        let ip_addr = remote.socket_addr().ip();
         inner
             .address_filter
             .remove_connection(ip_addr)
@ -328,7 +347,26 @@ impl ConnectionTable {
         if !inner.conn_by_id[protocol_index].contains_key(&id) {
             return None;
         }
-        let conn = Self::remove_connection_records(&mut *inner, id);
+        let conn = Self::remove_connection_records(&mut inner, id);
         Some(conn)
     }
+
+    pub fn debug_print_table(&self) -> String {
+        let mut out = String::new();
+        let inner = self.inner.lock();
+        let cur_ts = get_aligned_timestamp();
+        for t in 0..inner.conn_by_id.len() {
+            out += &format!(
+                "  {} Connections: ({}/{})\n",
+                Self::index_to_protocol(t),
+                inner.conn_by_id[t].len(),
+                inner.max_connections[t]
+            );
+            for (_, conn) in &inner.conn_by_id[t] {
+                out += &format!("    {}\n", conn.debug_print(cur_ts));
+            }
+        }
+        out
+    }
 }

View File

@ -46,7 +46,7 @@ use storage_manager::*;
 #[cfg(target_arch = "wasm32")]
 use wasm::*;
 #[cfg(target_arch = "wasm32")]
-pub use wasm::{LOCAL_NETWORK_CAPABILITIES, MAX_CAPABILITIES, PUBLIC_INTERNET_CAPABILITIES};
+pub use wasm::{/* LOCAL_NETWORK_CAPABILITIES, */ MAX_CAPABILITIES, PUBLIC_INTERNET_CAPABILITIES,};

 ////////////////////////////////////////////////////////////////////////////////////////
@ -61,16 +61,18 @@ pub const PUBLIC_ADDRESS_CHECK_TASK_INTERVAL_SECS: u32 = 60;
 pub const PUBLIC_ADDRESS_INCONSISTENCY_TIMEOUT_US: TimestampDuration =
     TimestampDuration::new(300_000_000u64); // 5 minutes
 pub const PUBLIC_ADDRESS_INCONSISTENCY_PUNISHMENT_TIMEOUT_US: TimestampDuration =
-    TimestampDuration::new(3600_000_000u64); // 60 minutes
+    TimestampDuration::new(3_600_000_000_u64); // 60 minutes
 pub const ADDRESS_FILTER_TASK_INTERVAL_SECS: u32 = 60;
 pub const BOOT_MAGIC: &[u8; 4] = b"BOOT";

-#[derive(Copy, Clone, Debug, Default)]
+#[derive(Clone, Debug, Default)]
 pub struct ProtocolConfig {
     pub outbound: ProtocolTypeSet,
     pub inbound: ProtocolTypeSet,
     pub family_global: AddressTypeSet,
     pub family_local: AddressTypeSet,
+    pub public_internet_capabilities: Vec<FourCC>,
+    pub local_network_capabilities: Vec<FourCC>,
 }

 // Things we get when we start up and go away when we shut down
@ -261,7 +263,7 @@ impl NetworkManager {
     where
         F: FnOnce(&VeilidConfigInner) -> R,
     {
-        f(&*self.unlocked_inner.config.get())
+        f(&self.unlocked_inner.config.get())
     }
     pub fn storage_manager(&self) -> StorageManager {
         self.unlocked_inner.storage_manager.clone()
@ -665,7 +667,7 @@ impl NetworkManager {
     #[instrument(level = "trace", skip(self), err)]
     pub async fn handle_signal(
         &self,
-        connection_descriptor: ConnectionDescriptor,
+        signal_connection_descriptor: ConnectionDescriptor,
         signal_info: SignalInfo,
     ) -> EyreResult<NetworkResult<()>> {
         match signal_info {
@ -689,8 +691,9 @@ impl NetworkManager {
             };

             // Restrict reverse connection to same protocol as inbound signal
-            let peer_nr = peer_nr
-                .filtered_clone(NodeRefFilter::from(connection_descriptor.protocol_type()));
+            let peer_nr = peer_nr.filtered_clone(NodeRefFilter::from(
+                signal_connection_descriptor.protocol_type(),
+            ));

             // Make a reverse connection to the peer and send the receipt to it
             rpc.rpc_call_return_receipt(Destination::direct(peer_nr), receipt)
@ -891,7 +894,7 @@ impl NetworkManager {
             data.len(),
             connection_descriptor
         );
-        let remote_addr = connection_descriptor.remote_address().to_ip_addr();
+        let remote_addr = connection_descriptor.remote_address().ip_addr();

         // Network accounting
         self.stats_packet_rcvd(remote_addr, ByteCount::new(data.len() as u64));
@ -899,7 +902,7 @@ impl NetworkManager {
         // If this is a zero length packet, just drop it, because these are used for hole punching
         // and possibly other low-level network connectivity tasks and will never require
         // more processing or forwarding
-        if data.is_empty() {
+        if data.is_empty() {
             return Ok(true);
         }

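A side effect of adding the two `Vec<FourCC>` capability fields is that `ProtocolConfig` can no longer derive `Copy`: `Vec` owns heap memory, so the struct must be explicitly cloned at former copy sites. A sketch with stand-in types (the `FourCC` alias and field names mirror the diff but are illustrative here):

```rust
// Stand-in for the crate's FourCC capability code type
type FourCC = [u8; 4];

#[derive(Clone, Debug, Default)] // `Copy` can no longer be derived: Vec owns heap memory
struct ProtocolConfig {
    public_internet_capabilities: Vec<FourCC>,
    local_network_capabilities: Vec<FourCC>,
}

fn main() {
    let mut config = ProtocolConfig::default();
    config.public_internet_capabilities.push(*b"CAP1"); // hypothetical capability code
    // Explicit clone is now required where the struct used to be implicitly copied
    let snapshot = config.clone();
    assert_eq!(snapshot.public_internet_capabilities.len(), 1);
    assert!(snapshot.local_network_capabilities.is_empty());
}
```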
View File

@ -14,6 +14,13 @@ pub enum DetectedDialInfo {
     Detected(DialInfoDetail),
 }

+// Detection result of external address
+#[derive(Clone, Debug)]
+pub struct DetectionResult {
+    pub ddi: DetectedDialInfo,
+    pub external_address_types: AddressTypeSet,
+}
+
 // Result of checking external address
 #[derive(Clone, Debug)]
 struct ExternalInfo {
@ -141,10 +148,8 @@ impl DiscoveryContext {
         let dial_info_filter = DialInfoFilter::all()
             .with_protocol_type(protocol_type)
             .with_address_type(address_type);
-        let inbound_dial_info_entry_filter = RoutingTable::make_inbound_dial_info_entry_filter(
-            routing_domain,
-            dial_info_filter.clone(),
-        );
+        let inbound_dial_info_entry_filter =
+            RoutingTable::make_inbound_dial_info_entry_filter(routing_domain, dial_info_filter);
         let disallow_relays_filter = Box::new(
             move |rti: &RoutingTableInner, v: Option<Arc<BucketEntry>>| {
                 let v = v.unwrap();
@ -199,7 +204,7 @@ impl DiscoveryContext {
             let node = node.filtered_clone(
                 NodeRefFilter::new()
                     .with_routing_domain(routing_domain)
-                    .with_dial_info_filter(dial_info_filter.clone()),
+                    .with_dial_info_filter(dial_info_filter),
             );
             async move {
                 if let Some(address) = this.request_public_address(node.clone()).await {
@ -219,9 +224,7 @@ impl DiscoveryContext {
         let mut external_address_infos = Vec::new();

-        for ni in 0..nodes.len() - 1 {
-            let node = nodes[ni].clone();
+        for node in nodes.iter().take(nodes.len() - 1).cloned() {
             let gpa_future = get_public_address_func(node);
             unord.push(gpa_future);
@ -248,7 +251,9 @@ impl DiscoveryContext {
             }
         }
         if external_address_infos.len() < 2 {
-            log_net!(debug "not enough peers responded with an external address");
+            log_net!(debug "not enough peers responded with an external address for type {:?}:{:?}",
+                protocol_type,
+                address_type);
             return false;
         }
@ -277,15 +282,15 @@ impl DiscoveryContext {
         node_ref.set_filter(None);

         // ask the node to send us a dial info validation receipt
-        let out = rpc_processor
+        rpc_processor
             .rpc_call_validate_dial_info(node_ref.clone(), dial_info, redirect)
             .await
             .map_err(logthru_net!(
                 "failed to send validate_dial_info to {:?}",
                 node_ref
             ))
-            .unwrap_or(false);
-        out
+            .unwrap_or(false)
     }

     #[instrument(level = "trace", skip(self), ret)]
@ -307,9 +312,14 @@ impl DiscoveryContext {
         // Attempt a port mapping. If this doesn't succeed, it's not going to
         let Some(mapped_external_address) = igd_manager
-            .map_any_port(low_level_protocol_type, address_type, local_port, Some(external_1.address.to_ip_addr()))
-            .await else
-        {
+            .map_any_port(
+                low_level_protocol_type,
+                address_type,
+                local_port,
+                Some(external_1.address.ip_addr()),
+            )
+            .await
+        else {
             return None;
         };
@ -377,28 +387,34 @@ impl DiscoveryContext {
     #[instrument(level = "trace", skip(self), ret)]
     async fn protocol_process_no_nat(
         &self,
-        unord: &mut FuturesUnordered<SendPinBoxFuture<Option<DetectedDialInfo>>>,
+        unord: &mut FuturesUnordered<SendPinBoxFuture<Option<DetectionResult>>>,
     ) {
         let external_1 = self.inner.lock().external_1.as_ref().unwrap().clone();

         let this = self.clone();
-        let do_no_nat_fut: SendPinBoxFuture<Option<DetectedDialInfo>> = Box::pin(async move {
+        let do_no_nat_fut: SendPinBoxFuture<Option<DetectionResult>> = Box::pin(async move {
             // Do a validate_dial_info on the external address from a redirected node
             if this
                 .validate_dial_info(external_1.node.clone(), external_1.dial_info.clone(), true)
                 .await
             {
                 // Add public dial info with Direct dialinfo class
-                Some(DetectedDialInfo::Detected(DialInfoDetail {
-                    dial_info: external_1.dial_info.clone(),
-                    class: DialInfoClass::Direct,
-                }))
+                Some(DetectionResult {
+                    ddi: DetectedDialInfo::Detected(DialInfoDetail {
+                        dial_info: external_1.dial_info.clone(),
+                        class: DialInfoClass::Direct,
+                    }),
+                    external_address_types: AddressTypeSet::only(external_1.address.address_type()),
+                })
             } else {
                 // Add public dial info with Blocked dialinfo class
-                Some(DetectedDialInfo::Detected(DialInfoDetail {
-                    dial_info: external_1.dial_info.clone(),
-                    class: DialInfoClass::Blocked,
-                }))
+                Some(DetectionResult {
+                    ddi: DetectedDialInfo::Detected(DialInfoDetail {
+                        dial_info: external_1.dial_info.clone(),
+                        class: DialInfoClass::Blocked,
+                    }),
+                    external_address_types: AddressTypeSet::only(external_1.address.address_type()),
+                })
             }
         });
         unord.push(do_no_nat_fut);
@ -408,7 +424,7 @@ impl DiscoveryContext {
     #[instrument(level = "trace", skip(self), ret)]
     async fn protocol_process_nat(
         &self,
-        unord: &mut FuturesUnordered<SendPinBoxFuture<Option<DetectedDialInfo>>>,
+        unord: &mut FuturesUnordered<SendPinBoxFuture<Option<DetectionResult>>>,
     ) {
         // Get the external dial info for our use here
         let (external_1, external_2) = {
@ -420,9 +436,18 @@ impl DiscoveryContext {
         };

         // If we have two different external addresses, then this is a symmetric NAT
-        if external_2.address != external_1.address {
-            let do_symmetric_nat_fut: SendPinBoxFuture<Option<DetectedDialInfo>> =
-                Box::pin(async move { Some(DetectedDialInfo::SymmetricNAT) });
+        if external_2.address.address() != external_1.address.address() {
+            let do_symmetric_nat_fut: SendPinBoxFuture<Option<DetectionResult>> =
+                Box::pin(async move {
+                    Some(DetectionResult {
+                        ddi: DetectedDialInfo::SymmetricNAT,
+                        external_address_types: AddressTypeSet::only(
+                            external_1.address.address_type(),
+                        ) | AddressTypeSet::only(
+                            external_2.address.address_type(),
+                        ),
+                    })
+                });
             unord.push(do_symmetric_nat_fut);
             return;
         }
@ -437,7 +462,7 @@ impl DiscoveryContext {
         {
             if external_1.dial_info.port() != local_port {
                 let c_external_1 = external_1.clone();
-                let do_manual_map_fut: SendPinBoxFuture<Option<DetectedDialInfo>> =
+                let do_manual_map_fut: SendPinBoxFuture<Option<DetectionResult>> =
                     Box::pin(async move {
                         // Do a validate_dial_info on the external address, but with the same port as the local port of local interface, from a redirected node
                         // This test is to see if a node had manual port forwarding done with the same port number as the local listener
@ -454,10 +479,15 @@ impl DiscoveryContext {
                             .await
                         {
                             // Add public dial info with Direct dialinfo class
-                            return Some(DetectedDialInfo::Detected(DialInfoDetail {
-                                dial_info: external_1_dial_info_with_local_port,
-                                class: DialInfoClass::Direct,
-                            }));
+                            return Some(DetectionResult {
+                                ddi: DetectedDialInfo::Detected(DialInfoDetail {
+                                    dial_info: external_1_dial_info_with_local_port,
+                                    class: DialInfoClass::Direct,
+                                }),
+                                external_address_types: AddressTypeSet::only(
+                                    c_external_1.address.address_type(),
+                                ),
+                            });
                         }

                         None
@ -472,7 +502,7 @@ impl DiscoveryContext {
         // Full Cone NAT Detection
         ///////////
         let this = self.clone();
-        let do_nat_detect_fut: SendPinBoxFuture<Option<DetectedDialInfo>> = Box::pin(async move {
+        let do_nat_detect_fut: SendPinBoxFuture<Option<DetectionResult>> = Box::pin(async move {
             let mut retry_count = {
                 let c = this.unlocked_inner.net.config.get();
                 c.network.restricted_nat_retries
@ -484,7 +514,7 @@ impl DiscoveryContext {
             let c_this = this.clone();
             let c_external_1 = external_1.clone();
-            let do_full_cone_fut: SendPinBoxFuture<Option<DetectedDialInfo>> =
+            let do_full_cone_fut: SendPinBoxFuture<Option<DetectionResult>> =
                 Box::pin(async move {
                     // Let's see what kind of NAT we have
                     // Does a redirected dial info validation from a different address and a random port find us?
@ -499,10 +529,15 @@ impl DiscoveryContext {
                         // Yes, another machine can use the dial info directly, so Full Cone
                        // Add public dial info with full cone NAT network class
-                        return Some(DetectedDialInfo::Detected(DialInfoDetail {
-                            dial_info: c_external_1.dial_info,
-                            class: DialInfoClass::FullConeNAT,
-                        }));
+                        return Some(DetectionResult {
+                            ddi: DetectedDialInfo::Detected(DialInfoDetail {
+                                dial_info: c_external_1.dial_info,
+                                class: DialInfoClass::FullConeNAT,
+                            }),
+                            external_address_types: AddressTypeSet::only(
+                                c_external_1.address.address_type(),
+                            ),
+                        });
                     }
                     None
                 });
@ -511,7 +546,7 @@ impl DiscoveryContext {
             let c_this = this.clone();
             let c_external_1 = external_1.clone();
             let c_external_2 = external_2.clone();
-            let do_restricted_cone_fut: SendPinBoxFuture<Option<DetectedDialInfo>> =
+            let do_restricted_cone_fut: SendPinBoxFuture<Option<DetectionResult>> =
                 Box::pin(async move {
                     // We are restricted, determine what kind of restriction
@ -528,33 +563,43 @@ impl DiscoveryContext {
                         .await
                     {
                         // Got a reply from a non-default port, which means we're only address restricted
-                        return Some(DetectedDialInfo::Detected(DialInfoDetail {
-                            dial_info: c_external_1.dial_info.clone(),
-                            class: DialInfoClass::AddressRestrictedNAT,
-                        }));
+                        return Some(DetectionResult {
+                            ddi: DetectedDialInfo::Detected(DialInfoDetail {
+                                dial_info: c_external_1.dial_info.clone(),
+                                class: DialInfoClass::AddressRestrictedNAT,
+                            }),
+                            external_address_types: AddressTypeSet::only(
+                                c_external_1.address.address_type(),
+                            ),
+                        });
                     }
                     // Didn't get a reply from a non-default port, which means we are also port restricted
Some(DetectedDialInfo::Detected(DialInfoDetail { Some(DetectionResult {
dial_info: c_external_1.dial_info.clone(), ddi: DetectedDialInfo::Detected(DialInfoDetail {
class: DialInfoClass::PortRestrictedNAT, dial_info: c_external_1.dial_info.clone(),
})) class: DialInfoClass::PortRestrictedNAT,
}),
external_address_types: AddressTypeSet::only(
c_external_1.address.address_type(),
),
})
}); });
ord.push_back(do_restricted_cone_fut); ord.push_back(do_restricted_cone_fut);
// Return the first result we get // Return the first result we get
let mut some_ddi = None; let mut some_dr = None;
while let Some(res) = ord.next().await { while let Some(res) = ord.next().await {
if let Some(ddi) = res { if let Some(dr) = res {
some_ddi = Some(ddi); some_dr = Some(dr);
break; break;
} }
} }
if let Some(ddi) = some_ddi { if let Some(dr) = some_dr {
if let DetectedDialInfo::Detected(did) = &ddi { if let DetectedDialInfo::Detected(did) = &dr.ddi {
// If we got something better than restricted NAT or we're done retrying // If we got something better than restricted NAT or we're done retrying
if did.class < DialInfoClass::AddressRestrictedNAT || retry_count == 0 { if did.class < DialInfoClass::AddressRestrictedNAT || retry_count == 0 {
return Some(ddi); return Some(dr);
} }
} }
} }
@ -572,7 +617,7 @@ impl DiscoveryContext {
/// Add discovery futures to an unordered set that may detect dialinfo when they complete /// Add discovery futures to an unordered set that may detect dialinfo when they complete
pub async fn discover( pub async fn discover(
&self, &self,
unord: &mut FuturesUnordered<SendPinBoxFuture<Option<DetectedDialInfo>>>, unord: &mut FuturesUnordered<SendPinBoxFuture<Option<DetectionResult>>>,
) { ) {
let enable_upnp = { let enable_upnp = {
let c = self.unlocked_inner.net.config.get(); let c = self.unlocked_inner.net.config.get();
@ -590,16 +635,21 @@ impl DiscoveryContext {
/////////// ///////////
if enable_upnp { if enable_upnp {
let this = self.clone(); let this = self.clone();
let do_mapped_fut: SendPinBoxFuture<Option<DetectedDialInfo>> = Box::pin(async move { let do_mapped_fut: SendPinBoxFuture<Option<DetectionResult>> = Box::pin(async move {
// Attempt a port mapping via all available and enabled mechanisms // Attempt a port mapping via all available and enabled mechanisms
// Try this before the direct mapping in the event that we are restarting // Try this before the direct mapping in the event that we are restarting
// and may not have recorded a mapping created the last time // and may not have recorded a mapping created the last time
if let Some(external_mapped_dial_info) = this.try_upnp_port_mapping().await { if let Some(external_mapped_dial_info) = this.try_upnp_port_mapping().await {
// Got a port mapping, let's use it // Got a port mapping, let's use it
return Some(DetectedDialInfo::Detected(DialInfoDetail { return Some(DetectionResult {
dial_info: external_mapped_dial_info.clone(), ddi: DetectedDialInfo::Detected(DialInfoDetail {
class: DialInfoClass::Mapped, dial_info: external_mapped_dial_info.clone(),
})); class: DialInfoClass::Mapped,
}),
external_address_types: AddressTypeSet::only(
external_mapped_dial_info.address_type(),
),
});
} }
None None
}); });
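The early-out above (`did.class < DialInfoClass::AddressRestrictedNAT`) relies on the dial-info classes having a derived order in which numerically lower variants are more directly reachable. A minimal sketch of that pattern (the variant list follows the classes used in this diff; the derive-based ordering is an assumption about the actual Veilid definition):

```rust
// Sketch: an ordered dial-info class where "lower" means more directly
// reachable, so comparing with `<` selects the better detection results.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum DialInfoClass {
    Direct,               // reachable as-is
    Mapped,               // reachable via a UPnP/IGD port mapping
    FullConeNAT,          // any host can reuse the discovered mapping
    AddressRestrictedNAT, // only addresses we've contacted get through
    PortRestrictedNAT,    // only exact address:port pairs we've contacted
}

fn stop_retrying(class: DialInfoClass) -> bool {
    // Mirrors `did.class < DialInfoClass::AddressRestrictedNAT`:
    // stop retrying once we have something better than a restricted NAT.
    class < DialInfoClass::AddressRestrictedNAT
}
```

Because `Ord` is derived, the comparison follows the declaration order of the variants, which is why `Direct` beats every NAT class.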

View File

@@ -184,7 +184,7 @@ impl IGDManager {
         let mut found = None;
         for (pmk, pmv) in &inner.port_maps {
             if pmk.llpt == llpt && pmk.at == at && pmv.mapped_port == mapped_port {
-                found = Some(pmk.clone());
+                found = Some(*pmk);
                 break;
             }
         }
@@ -192,7 +192,7 @@ impl IGDManager {
         let _pmv = inner.port_maps.remove(&pmk).expect("key found but remove failed");

         // Find gateway
-        let gw = Self::find_gateway(&mut *inner, at)?;
+        let gw = Self::find_gateway(&mut inner, at)?;

         // Unmap port
         match gw.remove_port(convert_llpt(llpt), mapped_port) {
@@ -230,10 +230,10 @@ impl IGDManager {
         }

         // Get local ip address
-        let local_ip = Self::find_local_ip(&mut *inner, at)?;
+        let local_ip = Self::find_local_ip(&mut inner, at)?;

         // Find gateway
-        let gw = Self::find_gateway(&mut *inner, at)?;
+        let gw = Self::find_gateway(&mut inner, at)?;

         // Get external address
         let ext_ip = match gw.get_external_ip() {
@@ -245,16 +245,12 @@ impl IGDManager {
         };

         // Ensure external IP matches address type
-        if ext_ip.is_ipv4() {
-            if at != AddressType::IPV4 {
-                log_net!(debug "mismatched ip address type from igd, wanted v4, got v6");
-                return None;
-            }
-        } else if ext_ip.is_ipv6() {
-            if at != AddressType::IPV6 {
-                log_net!(debug "mismatched ip address type from igd, wanted v6, got v4");
-                return None;
-            }
+        if ext_ip.is_ipv4() && at != AddressType::IPV4 {
+            log_net!(debug "mismatched ip address type from igd, wanted v4, got v6");
+            return None;
+        } else if ext_ip.is_ipv6() && at != AddressType::IPV6 {
+            log_net!(debug "mismatched ip address type from igd, wanted v6, got v4");
+            return None;
         }

         if let Some(expected_external_address) = expected_external_address {
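The IGD change above collapses two nested `if`s into compound conditions that reject an external IP whose family does not match the requested address type. The same predicate can be sketched standalone (`AddressType` here is a stand-in enum for illustration, not the actual Veilid type):

```rust
use std::net::IpAddr;

// Stand-in for the address-type enum used by the IGD manager.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum AddressType {
    IPV4,
    IPV6,
}

// Equivalent to the collapsed `if` / `else if` in the diff: an external IP
// passes only when its family matches the address type we asked the gateway for.
fn external_ip_matches(ext_ip: IpAddr, at: AddressType) -> bool {
    !(ext_ip.is_ipv4() && at != AddressType::IPV4)
        && !(ext_ip.is_ipv6() && at != AddressType::IPV6)
}
```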

View File

@@ -421,7 +421,7 @@ impl Network {
         if self
             .network_manager()
             .address_filter()
-            .is_ip_addr_punished(dial_info.address().to_ip_addr())
+            .is_ip_addr_punished(dial_info.address().ip_addr())
         {
             return Ok(NetworkResult::no_connection_other("punished"));
         }
@@ -462,7 +462,7 @@ impl Network {
             }
             // Network accounting
             self.network_manager()
-                .stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64));
+                .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));

             Ok(NetworkResult::Value(()))
         })
@@ -491,7 +491,7 @@ impl Network {
         if self
             .network_manager()
             .address_filter()
-            .is_ip_addr_punished(dial_info.address().to_ip_addr())
+            .is_ip_addr_punished(dial_info.address().ip_addr())
         {
             return Ok(NetworkResult::no_connection_other("punished"));
         }
@@ -507,7 +507,7 @@ impl Network {
                 .await
                 .wrap_err("send message failure")?);
             self.network_manager()
-                .stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64));
+                .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));

             // receive single response
             let mut out = vec![0u8; MAX_MESSAGE_SIZE];
@@ -519,7 +519,7 @@ impl Network {
                 .into_network_result())
             .wrap_err("recv_message failure")?;
-            let recv_socket_addr = recv_addr.remote_address().to_socket_addr();
+            let recv_socket_addr = recv_addr.remote_address().socket_addr();
             self.network_manager()
                 .stats_packet_rcvd(recv_socket_addr.ip(), ByteCount::new(recv_len as u64));
@@ -552,7 +552,7 @@ impl Network {
             network_result_try!(pnc.send(data).await.wrap_err("send failure")?);
             self.network_manager()
-                .stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64));
+                .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));

             let out =
                 network_result_try!(network_result_try!(timeout(timeout_ms, pnc.recv())
@@ -560,10 +560,8 @@ impl Network {
                 .into_network_result())
             .wrap_err("recv failure")?);

-            self.network_manager().stats_packet_rcvd(
-                dial_info.to_ip_addr(),
-                ByteCount::new(out.len() as u64),
-            );
+            self.network_manager()
+                .stats_packet_rcvd(dial_info.ip_addr(), ByteCount::new(out.len() as u64));

             Ok(NetworkResult::Value(out))
         }
@@ -583,10 +581,10 @@ impl Network {
         // Handle connectionless protocol
         if descriptor.protocol_type() == ProtocolType::UDP {
             // send over the best udp socket we have bound since UDP is not connection oriented
-            let peer_socket_addr = descriptor.remote().to_socket_addr();
+            let peer_socket_addr = descriptor.remote().socket_addr();
             if let Some(ph) = self.find_best_udp_protocol_handler(
                 &peer_socket_addr,
-                &descriptor.local().map(|sa| sa.to_socket_addr()),
+                &descriptor.local().map(|sa| sa.socket_addr()),
             ) {
                 network_result_value_or_log!(ph.clone()
                     .send_message(data.clone(), peer_socket_addr)
@@ -612,7 +610,7 @@ impl Network {
                 ConnectionHandleSendResult::Sent => {
                     // Network accounting
                     self.network_manager().stats_packet_sent(
-                        descriptor.remote().to_socket_addr().ip(),
+                        descriptor.remote().socket_addr().ip(),
                         ByteCount::new(data_len as u64),
                     );
@@ -676,7 +674,7 @@ impl Network {
             // Network accounting
             self.network_manager()
-                .stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64));
+                .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));

             Ok(NetworkResult::value(connection_descriptor))
         })
@@ -686,7 +684,7 @@ impl Network {
     /////////////////////////////////////////////////////////////////

     pub fn get_protocol_config(&self) -> ProtocolConfig {
-        self.inner.lock().protocol_config
+        self.inner.lock().protocol_config.clone()
     }

     #[instrument(level = "debug", err, skip_all)]
@@ -701,7 +699,7 @@ impl Network {
             .with_interfaces(|interfaces| {
                 trace!("interfaces: {:#?}", interfaces);

-                for (_name, intf) in interfaces {
+                for intf in interfaces.values() {
                     // Skip networks that we should never encounter
                     if intf.is_loopback() || !intf.is_running() {
                         continue;
@@ -792,14 +790,33 @@ impl Network {
                 family_local.insert(AddressType::IPV6);
             }

+            // set up the routing table's network config
+            // if we have static public dialinfo, upgrade our network class
+            let public_internet_capabilities = {
+                PUBLIC_INTERNET_CAPABILITIES
+                    .iter()
+                    .copied()
+                    .filter(|cap| !c.capabilities.disable.contains(cap))
+                    .collect::<Vec<Capability>>()
+            };
+            let local_network_capabilities = {
+                LOCAL_NETWORK_CAPABILITIES
+                    .iter()
+                    .copied()
+                    .filter(|cap| !c.capabilities.disable.contains(cap))
+                    .collect::<Vec<Capability>>()
+            };
+
             ProtocolConfig {
                 outbound,
                 inbound,
                 family_global,
                 family_local,
+                public_internet_capabilities,
+                local_network_capabilities,
             }
         };
-        inner.protocol_config = protocol_config;
+        inner.protocol_config = protocol_config.clone();

         protocol_config
     };
@@ -837,36 +854,17 @@ impl Network {
         // that we have ports available to us
         self.free_bound_first_ports();

-        // set up the routing table's network config
-        // if we have static public dialinfo, upgrade our network class
-        let public_internet_capabilities = {
-            let c = self.config.get();
-            PUBLIC_INTERNET_CAPABILITIES
-                .iter()
-                .copied()
-                .filter(|cap| !c.capabilities.disable.contains(cap))
-                .collect::<Vec<Capability>>()
-        };
-        let local_network_capabilities = {
-            let c = self.config.get();
-            LOCAL_NETWORK_CAPABILITIES
-                .iter()
-                .copied()
-                .filter(|cap| !c.capabilities.disable.contains(cap))
-                .collect::<Vec<Capability>>()
-        };
-
         editor_public_internet.setup_network(
             protocol_config.outbound,
             protocol_config.inbound,
             protocol_config.family_global,
-            public_internet_capabilities,
+            protocol_config.public_internet_capabilities,
         );
         editor_local_network.setup_network(
             protocol_config.outbound,
             protocol_config.inbound,
             protocol_config.family_local,
-            local_network_capabilities,
+            protocol_config.local_network_capabilities,
         );

         let detect_address_changes = {
             let c = self.config.get();
@@ -1019,14 +1017,27 @@ impl Network {
             let routing_table = self.routing_table();
             let rth = routing_table.get_routing_table_health();

-            // Need at least two entries to do this
-            if rth.unreliable_entry_count + rth.reliable_entry_count >= 2 {
+            // We want at least two live entries per crypto kind before we start doing this (bootstrap)
+            let mut has_at_least_two = true;
+            for ck in VALID_CRYPTO_KINDS {
+                if rth
+                    .live_entry_counts
+                    .get(&(RoutingDomain::PublicInternet, ck))
+                    .copied()
+                    .unwrap_or_default()
+                    < 2
+                {
+                    has_at_least_two = false;
+                    break;
+                }
+            }
+            if has_at_least_two {
                 self.unlocked_inner.update_network_class_task.tick().await?;
             }
         }

-        // If we aren't resetting the network already,
-        // check our network interfaces to see if they have changed
+        // Check our network interfaces to see if they have changed
         if !self.needs_restart() {
            self.unlocked_inner.network_interfaces_task.tick().await?;
         }
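The capability lists above are built with the same iterator pattern in both hunks: start from a static capability array and drop anything the node's config disables. A self-contained sketch of that filter (the `Capability` alias and the FourCC-style names `ROUT`/`SGNL`/`RLAY` are illustrative assumptions, not the actual Veilid capability codes):

```rust
// Illustrative capability type: a FourCC-style 4-byte code.
type Capability = [u8; 4];

// Hypothetical static capability list, standing in for
// PUBLIC_INTERNET_CAPABILITIES in the diff.
const PUBLIC_INTERNET_CAPABILITIES: [Capability; 3] = [*b"ROUT", *b"SGNL", *b"RLAY"];

// Mirrors the `.iter().copied().filter(...).collect()` pattern:
// keep every capability that is not in the configured disable list.
fn enabled_capabilities(all: &[Capability], disabled: &[Capability]) -> Vec<Capability> {
    all.iter()
        .copied()
        .filter(|cap| !disabled.contains(cap))
        .collect()
}
```

Moving this filtering into `get_protocol_config()` (as the diff does) lets the resulting lists ride along in `ProtocolConfig` instead of being recomputed at each `setup_network` call site.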

View File

@@ -11,19 +11,6 @@ impl Network {
             .get_network_class(RoutingDomain::PublicInternet)
             .unwrap_or_default();

-        // get existing dial info into table by protocol/address type
-        let mut existing_dial_info = BTreeMap::<(ProtocolType, AddressType), DialInfoDetail>::new();
-        for did in self.routing_table().all_filtered_dial_info_details(
-            RoutingDomain::PublicInternet.into(),
-            &DialInfoFilter::all(),
-        ) {
-            // Only need to keep one per pt/at pair, since they will all have the same dialinfoclass
-            existing_dial_info.insert(
-                (did.dial_info.protocol_type(), did.dial_info.address_type()),
-                did,
-            );
-        }
-
         match ddi {
             DetectedDialInfo::SymmetricNAT => {
                 // If we get any symmetric nat dialinfo, this whole network class is outbound only,
@@ -35,10 +22,24 @@ impl Network {
                     editor.clear_dial_info_details(None, None);
                     editor.set_network_class(Some(NetworkClass::OutboundOnly));
+                    editor.clear_relay_node();
                     editor.commit(true).await;
                 }
             }
             DetectedDialInfo::Detected(did) => {
+                // get existing dial info into table by protocol/address type
+                let mut existing_dial_info =
+                    BTreeMap::<(ProtocolType, AddressType), DialInfoDetail>::new();
+                for did in self.routing_table().all_filtered_dial_info_details(
+                    RoutingDomain::PublicInternet.into(),
+                    &DialInfoFilter::all(),
+                ) {
+                    // Only need to keep one per pt/at pair, since they will all have the same dialinfoclass
+                    existing_dial_info.insert(
+                        (did.dial_info.protocol_type(), did.dial_info.address_type()),
+                        did,
+                    );
+                }
+
                 // We got a dial info, upgrade everything unless we are fixed to outbound only due to a symmetric nat
                 if !matches!(existing_network_class, NetworkClass::OutboundOnly) {
                     // Get existing dial info for protocol/address type combination
@@ -103,7 +104,7 @@ impl Network {
         // Figure out if we can optimize TCP/WS checking since they are often on the same port
         let (protocol_config, tcp_same_port) = {
             let inner = self.inner.lock();
-            let protocol_config = inner.protocol_config;
+            let protocol_config = inner.protocol_config.clone();
             let tcp_same_port = if protocol_config.inbound.contains(ProtocolType::TCP)
                 && protocol_config.inbound.contains(ProtocolType::WS)
             {
@@ -125,9 +126,16 @@ impl Network {
             .collect();

         // Clear public dialinfo and network class in prep for discovery
         let mut editor = self
             .routing_table()
             .edit_routing_domain(RoutingDomain::PublicInternet);
+        editor.setup_network(
+            protocol_config.outbound,
+            protocol_config.inbound,
+            protocol_config.family_global,
+            protocol_config.public_internet_capabilities.clone(),
+        );
         editor.clear_dial_info_details(None, None);
         editor.set_network_class(None);
         editor.clear_relay_node();
@@ -226,14 +234,18 @@ impl Network {
         }

         // Wait for all discovery futures to complete and apply discoverycontexts
+        let mut all_address_types = AddressTypeSet::new();
         loop {
             match unord.next().timeout_at(stop_token.clone()).await {
-                Ok(Some(Some(ddi))) => {
+                Ok(Some(Some(dr))) => {
                     // Found some new dial info for this protocol/address combination
-                    self.update_with_detected_dial_info(ddi.clone()).await?;
+                    self.update_with_detected_dial_info(dr.ddi.clone()).await?;
+
+                    // Add the external address kinds to the set we've seen
+                    all_address_types |= dr.external_address_types;

                     // Add WS dialinfo as well if it is on the same port as TCP
-                    if let DetectedDialInfo::Detected(did) = &ddi {
+                    if let DetectedDialInfo::Detected(did) = &dr.ddi {
                         if did.dial_info.protocol_type() == ProtocolType::TCP && tcp_same_port {
                             // Make WS dialinfo as well with same socket address as TCP
                             let ws_ddi = DetectedDialInfo::Detected(DialInfoDetail {
@@ -262,7 +274,18 @@ impl Network {
             }
         }

-        // All done, see if things changed
+        // All done

+        // Set the address types we've seen
+        editor.setup_network(
+            protocol_config.outbound,
+            protocol_config.inbound,
+            all_address_types,
+            protocol_config.public_internet_capabilities,
+        );
+        editor.commit(true).await;
+
+        // See if the dial info changed
         let new_public_dial_info: HashSet<DialInfoDetail> = self
             .routing_table()
             .all_filtered_dial_info_details(

View File

@ -33,7 +33,7 @@ impl Network {
let server_config = self let server_config = self
.load_server_config() .load_server_config()
.wrap_err("Couldn't create TLS configuration")?; .wrap_err("Couldn't create TLS configuration")?;
let acceptor = TlsAcceptor::from(Arc::new(server_config)); let acceptor = TlsAcceptor::from(server_config);
self.inner.lock().tls_acceptor = Some(acceptor.clone()); self.inner.lock().tls_acceptor = Some(acceptor.clone());
Ok(acceptor) Ok(acceptor)
} }

View File

@@ -68,7 +68,7 @@ impl Network {
                     Ok(Ok((size, descriptor))) => {
                         // Network accounting
                         network_manager.stats_packet_rcvd(
-                            descriptor.remote_address().to_ip_addr(),
+                            descriptor.remote_address().ip_addr(),
                             ByteCount::new(size as u64),
                         );

View File

@@ -24,7 +24,7 @@ impl ProtocolNetworkConnection {
         timeout_ms: u32,
         address_filter: AddressFilter,
     ) -> io::Result<NetworkResult<ProtocolNetworkConnection>> {
-        if address_filter.is_ip_addr_punished(dial_info.address().to_ip_addr()) {
+        if address_filter.is_ip_addr_punished(dial_info.address().ip_addr()) {
             return Ok(NetworkResult::no_connection_other("punished"));
         }
         match dial_info.protocol_type() {

View File

@@ -19,7 +19,7 @@ impl RawTcpNetworkConnection {
     }

     pub fn descriptor(&self) -> ConnectionDescriptor {
-        self.descriptor.clone()
+        self.descriptor
     }

     // #[instrument(level = "trace", err, skip(self))]
@@ -132,11 +132,12 @@ impl RawTcpProtocolHandler {
     ) -> io::Result<Option<ProtocolNetworkConnection>> {
         log_net!("TCP: on_accept_async: enter");
         let mut peekbuf: [u8; PEEK_DETECT_LEN] = [0u8; PEEK_DETECT_LEN];
-        if let Err(_) = timeout(
+        if (timeout(
             self.connection_initial_timeout_ms,
             ps.peek_exact(&mut peekbuf),
         )
-        .await
+        .await)
+        .is_err()
         {
             return Ok(None);
         }

View File

@@ -79,9 +79,9 @@ impl RawUdpProtocolHandler {
         };

         #[cfg(feature = "verbose-tracing")]
-        tracing::Span::current().record("ret.len", &message_len);
+        tracing::Span::current().record("ret.len", message_len);
         #[cfg(feature = "verbose-tracing")]
-        tracing::Span::current().record("ret.descriptor", &format!("{:?}", descriptor).as_str());
+        tracing::Span::current().record("ret.descriptor", format!("{:?}", descriptor).as_str());
         Ok((message_len, descriptor))
     }
@@ -134,7 +134,7 @@ impl RawUdpProtocolHandler {
         );

         #[cfg(feature = "verbose-tracing")]
-        tracing::Span::current().record("ret.descriptor", &format!("{:?}", descriptor).as_str());
+        tracing::Span::current().record("ret.descriptor", format!("{:?}", descriptor).as_str());
         Ok(NetworkResult::value(descriptor))
     }
@@ -143,7 +143,7 @@ impl RawUdpProtocolHandler {
         socket_addr: &SocketAddr,
     ) -> io::Result<RawUdpProtocolHandler> {
         // get local wildcard address for bind
-        let local_socket_addr = compatible_unspecified_socket_addr(&socket_addr);
+        let local_socket_addr = compatible_unspecified_socket_addr(socket_addr);
         let socket = UdpSocket::bind(local_socket_addr).await?;
         Ok(RawUdpProtocolHandler::new(Arc::new(socket), None))
     }

View File

@@ -1,10 +1,22 @@
 use super::*;
 use async_tls::TlsConnector;
+use async_tungstenite::tungstenite::handshake::server::{
+    Callback, ErrorResponse, Request, Response,
+};
+use async_tungstenite::tungstenite::http::StatusCode;
 use async_tungstenite::tungstenite::protocol::Message;
-use async_tungstenite::{accept_async, client_async, WebSocketStream};
+use async_tungstenite::{accept_hdr_async, client_async, WebSocketStream};
 use futures_util::{AsyncRead, AsyncWrite, SinkExt};
 use sockets::*;

+/// Maximum number of websocket request headers to permit
+const MAX_WS_HEADERS: usize = 24;
+/// Maximum size of any one specific websocket header
+const MAX_WS_HEADER_LENGTH: usize = 512;
+/// Maximum total size of headers and request including newlines
+const MAX_WS_BEFORE_BODY: usize = 2048;
+
 cfg_if! {
     if #[cfg(feature="rt-async-std")] {
         pub type WebsocketNetworkConnectionWSS =
@@ -62,7 +74,7 @@ where
     }

     pub fn descriptor(&self) -> ConnectionDescriptor {
-        self.descriptor.clone()
+        self.descriptor
     }

     // #[instrument(level = "trace", err, skip(self))]
@@ -180,29 +192,57 @@ impl WebsocketProtocolHandler {
         log_net!("WS: on_accept_async: enter");
         let request_path_len = self.arc.request_path.len() + 2;

-        let mut peekbuf: Vec<u8> = vec![0u8; request_path_len];
-        if let Err(_) = timeout(
+        let mut peek_buf = [0u8; MAX_WS_BEFORE_BODY];
+        let peek_len = match timeout(
             self.arc.connection_initial_timeout_ms,
-            ps.peek_exact(&mut peekbuf),
+            ps.peek(&mut peek_buf),
         )
         .await
         {
+            Err(_) => {
+                // Timeout
+                return Ok(None);
+            }
+            Ok(Err(_)) => {
+                // Peek error
+                return Ok(None);
+            }
+            Ok(Ok(v)) => v,
+        };
+
+        // If we can't peek at least our request path, then fail out
+        if peek_len < request_path_len {
             return Ok(None);
         }

         // Check for websocket path
-        let matches_path = &peekbuf[0..request_path_len - 2] == self.arc.request_path.as_slice()
-            && (peekbuf[request_path_len - 2] == b' '
-                || (peekbuf[request_path_len - 2] == b'/'
-                    && peekbuf[request_path_len - 1] == b' '));
+        let matches_path = &peek_buf[0..request_path_len - 2] == self.arc.request_path.as_slice()
+            && (peek_buf[request_path_len - 2] == b' '
+                || (peek_buf[request_path_len - 2] == b'/'
+                    && peek_buf[request_path_len - 1] == b' '));

         if !matches_path {
             return Ok(None);
         }

-        let ws_stream = accept_async(ps)
-            .await
-            .map_err(|e| io_error_other!(format!("failed websockets handshake: {}", e)))?;
+        // Check for double-CRLF indicating end of headers
+        // if we don't find the end of the headers within MAX_WS_BEFORE_BODY
+        // then we should bail, as this could be an attack or at best, something malformed
+        // Yes, this restricts our handling to CRLF-conforming HTTP implementations
+        // This check could be loosened if necessary, but until we have a reason to do so
+        // a stricter interpretation of HTTP is possible and desirable to reduce attack surface
+        if !peek_buf.windows(4).any(|w| w == b"\r\n\r\n") {
+            return Ok(None);
+        }
+
+        let ws_stream = match accept_hdr_async(ps, self.clone()).await {
+            Ok(v) => v,
+            Err(e) => {
+                log_net!(debug "failed websockets handshake: {}", e);
+                return Ok(None);
+            }
+        };

         // Wrap the websocket in a NetworkConnection and register it
         let protocol_type = if self.arc.tls {
@@ -266,7 +306,7 @@ impl WebsocketProtocolHandler {

         // Make our connection descriptor
         let descriptor = ConnectionDescriptor::new(
-            dial_info.to_peer_address(),
+            dial_info.peer_address(),
             SocketAddress::from_socket_addr(actual_local_addr),
         );
@@ -292,6 +332,23 @@ impl WebsocketProtocolHandler {
     }
 }

+impl Callback for WebsocketProtocolHandler {
+    fn on_request(self, request: &Request, response: Response) -> Result<Response, ErrorResponse> {
+        // Cap the number of headers total and limit the size of all headers
+        if request.headers().len() > MAX_WS_HEADERS
+            || request
+                .headers()
+                .iter()
+                .any(|h| (h.0.as_str().len() + h.1.as_bytes().len()) > MAX_WS_HEADER_LENGTH)
+        {
+            let mut error_response = ErrorResponse::new(None);
+            *error_response.status_mut() = StatusCode::NOT_FOUND;
+            return Err(error_response);
+        }
+        Ok(response)
+    }
+}
+
 impl ProtocolAcceptHandler for WebsocketProtocolHandler {
     fn on_accept(
         &self,
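The hardened websocket accept path above adds two guards before handing the stream to the handshake: the peeked bytes must contain a complete CRLF-terminated header block within a bounded prefix, and the parsed request must stay under header count and size caps. Both checks can be sketched standalone (the constants match the diff; the tuple-slice header representation is a simplification for illustration):

```rust
/// Maximum number of websocket request headers to permit (as in the diff).
const MAX_WS_HEADERS: usize = 24;
/// Maximum size of any one specific websocket header (as in the diff).
const MAX_WS_HEADER_LENGTH: usize = 512;

// Same check as the diff: a strict CRLF-conforming header block must end
// with a double CRLF somewhere inside the peeked prefix.
fn headers_complete(peek_buf: &[u8]) -> bool {
    peek_buf.windows(4).any(|w| w == b"\r\n\r\n")
}

// Complements the `Callback::on_request` rejection logic: a request is
// acceptable only if it is under the header count cap and no single
// name+value pair exceeds the per-header length cap.
fn headers_within_limits(headers: &[(&str, &[u8])]) -> bool {
    headers.len() <= MAX_WS_HEADERS
        && headers
            .iter()
            .all(|(name, value)| name.len() + value.len() <= MAX_WS_HEADER_LENGTH)
}
```

Bounding the peek at `MAX_WS_BEFORE_BODY` and rejecting oversized header sets keeps a malicious client from feeding an unbounded preamble into the handshake parser.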

View File

@@ -312,7 +312,7 @@ impl Network {
             // if no other public address is specified
             if !detect_address_changes
                 && public_address.is_none()
-                && routing_table.ensure_dial_info_is_valid(RoutingDomain::PublicInternet, &di)
+                && routing_table.ensure_dial_info_is_valid(RoutingDomain::PublicInternet, di)
             {
                 editor_public_internet.register_dial_info(di.clone(), DialInfoClass::Direct)?;
                 static_public = true;
@@ -449,7 +449,7 @@ impl Network {
         for socket_address in socket_addresses {
             // Skip addresses we already did
-            if registered_addresses.contains(&socket_address.to_ip_addr()) {
+            if registered_addresses.contains(&socket_address.ip_addr()) {
                 continue;
             }
             // Build dial info request url
@@ -628,7 +628,7 @@ impl Network {
             }

             // Register interface dial info
             editor_local_network.register_dial_info(di.clone(), DialInfoClass::Direct)?;
-            registered_addresses.insert(socket_address.to_ip_addr());
+            registered_addresses.insert(socket_address.ip_addr());
         }

         // Add static public dialinfo if it's configured

View File

@ -52,7 +52,7 @@ pub struct DummyNetworkConnection {
impl DummyNetworkConnection { impl DummyNetworkConnection {
pub fn descriptor(&self) -> ConnectionDescriptor { pub fn descriptor(&self) -> ConnectionDescriptor {
self.descriptor.clone() self.descriptor
} }
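Dropping the `.clone()` in the getter above works because the descriptor type is `Copy`: a getter on an all-`Copy` struct can return by value with no allocation. A minimal illustration with a hypothetical descriptor type (not the crate's `ConnectionDescriptor`):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Hypothetical stand-in: all fields are Copy, so the struct can derive Copy.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Descriptor {
    remote: SocketAddr,
}

struct Connection {
    descriptor: Descriptor,
}

impl Connection {
    // No `.clone()` needed: `self.descriptor` copies the value out.
    fn descriptor(&self) -> Descriptor {
        self.descriptor
    }
}

fn main() {
    let c = Connection {
        descriptor: Descriptor {
            remote: SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 5150),
        },
    };
    // Repeated calls hand out cheap copies; the original is untouched.
    assert_eq!(c.descriptor(), c.descriptor());
    println!("ok");
}
```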
// pub fn close(&self) -> io::Result<()> { // pub fn close(&self) -> io::Result<()> {
// Ok(()) // Ok(())
@ -94,6 +94,7 @@ pub struct NetworkConnection {
stats: Arc<Mutex<NetworkConnectionStats>>, stats: Arc<Mutex<NetworkConnectionStats>>,
sender: flume::Sender<(Option<Id>, Vec<u8>)>, sender: flume::Sender<(Option<Id>, Vec<u8>)>,
stop_source: Option<StopSource>, stop_source: Option<StopSource>,
protected: bool,
} }
impl NetworkConnection { impl NetworkConnection {
@ -112,6 +113,7 @@ impl NetworkConnection {
})), })),
sender, sender,
stop_source: None, stop_source: None,
protected: false,
} }
} }
@ -142,7 +144,7 @@ impl NetworkConnection {
local_stop_token, local_stop_token,
manager_stop_token, manager_stop_token,
connection_id, connection_id,
descriptor.clone(), descriptor,
receiver, receiver,
protocol_connection, protocol_connection,
stats.clone(), stats.clone(),
@ -157,6 +159,7 @@ impl NetworkConnection {
stats, stats,
sender, sender,
stop_source: Some(stop_source), stop_source: Some(stop_source),
protected: false,
} }
} }
@ -165,11 +168,19 @@ impl NetworkConnection {
} }
pub fn connection_descriptor(&self) -> ConnectionDescriptor { pub fn connection_descriptor(&self) -> ConnectionDescriptor {
self.descriptor.clone() self.descriptor
} }
pub fn get_handle(&self) -> ConnectionHandle { pub fn get_handle(&self) -> ConnectionHandle {
ConnectionHandle::new(self.connection_id, self.descriptor.clone(), self.sender.clone()) ConnectionHandle::new(self.connection_id, self.descriptor, self.sender.clone())
}
pub fn is_protected(&self) -> bool {
self.protected
}
pub fn protect(&mut self) {
self.protected = true;
} }
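The new `protected` flag lets the connection manager mark a connection it must keep (for example one carrying relay traffic) so that routine reclamation skips it. A sketch of that pattern, under the assumption that reclamation walks the connection table:

```rust
struct Conn {
    id: u64,
    protected: bool,
}

impl Conn {
    fn new(id: u64) -> Self {
        Self { id, protected: false }
    }
    fn protect(&mut self) {
        self.protected = true;
    }
    fn is_protected(&self) -> bool {
        self.protected
    }
}

/// Reclaim unprotected connections only, returning the ids that were dropped.
fn reclaim(conns: &mut Vec<Conn>) -> Vec<u64> {
    let dropped: Vec<u64> = conns
        .iter()
        .filter(|c| !c.is_protected())
        .map(|c| c.id)
        .collect();
    conns.retain(|c| c.is_protected());
    dropped
}

fn main() {
    let mut conns = vec![Conn::new(1), Conn::new(2)];
    conns[0].protect();
    assert_eq!(reclaim(&mut conns), vec![2]);
    assert_eq!(conns.len(), 1);
    println!("ok");
}
```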
pub fn close(&mut self) { pub fn close(&mut self) {
@ -186,12 +197,12 @@ impl NetworkConnection {
message: Vec<u8>, message: Vec<u8>,
) -> io::Result<NetworkResult<()>> { ) -> io::Result<NetworkResult<()>> {
let ts = get_aligned_timestamp(); let ts = get_aligned_timestamp();
let out = network_result_try!(protocol_connection.send(message).await?); network_result_try!(protocol_connection.send(message).await?);
let mut stats = stats.lock(); let mut stats = stats.lock();
stats.last_message_sent_time.max_assign(Some(ts)); stats.last_message_sent_time.max_assign(Some(ts));
Ok(NetworkResult::Value(out)) Ok(NetworkResult::Value(()))
} }
#[cfg_attr(feature="verbose-tracing", instrument(level="trace", skip(stats), fields(ret.len)))] #[cfg_attr(feature="verbose-tracing", instrument(level="trace", skip(stats), fields(ret.len)))]
@ -223,6 +234,7 @@ impl NetworkConnection {
} }
// Connection receiver loop // Connection receiver loop
#[allow(clippy::too_many_arguments)]
fn process_connection( fn process_connection(
connection_manager: ConnectionManager, connection_manager: ConnectionManager,
local_stop_token: StopToken, local_stop_token: StopToken,
@ -305,19 +317,19 @@ impl NetworkConnection {
let peer_address = protocol_connection.descriptor().remote(); let peer_address = protocol_connection.descriptor().remote();
// Check to see if it is punished // Check to see if it is punished
if address_filter.is_ip_addr_punished(peer_address.to_socket_addr().ip()) { if address_filter.is_ip_addr_punished(peer_address.socket_addr().ip()) {
return RecvLoopAction::Finish; return RecvLoopAction::Finish;
} }
// Check for connection close // Check for connection close
if v.is_no_connection() { if v.is_no_connection() {
log_net!(debug "Connection closed from: {} ({})", peer_address.to_socket_addr(), peer_address.protocol_type()); log_net!(debug "Connection closed from: {} ({})", peer_address.socket_addr(), peer_address.protocol_type());
return RecvLoopAction::Finish; return RecvLoopAction::Finish;
} }
// Punish invalid framing (tcp framing or websocket framing) // Punish invalid framing (tcp framing or websocket framing)
if v.is_invalid_message() { if v.is_invalid_message() {
address_filter.punish_ip_addr(peer_address.to_socket_addr().ip()); address_filter.punish_ip_addr(peer_address.socket_addr().ip());
return RecvLoopAction::Finish; return RecvLoopAction::Finish;
} }
@ -391,6 +403,17 @@ impl NetworkConnection {
.await; .await;
}.instrument(trace_span!("process_connection"))) }.instrument(trace_span!("process_connection")))
} }
pub fn debug_print(&self, cur_ts: Timestamp) -> String {
format!("{} <- {} | {} | est {} sent {} rcvd {}",
self.descriptor.remote_address(),
self.descriptor.local().map(|x| x.to_string()).unwrap_or("---".to_owned()),
self.connection_id.as_u64(),
debug_duration(cur_ts.as_u64().saturating_sub(self.established_time.as_u64())),
self.stats().last_message_sent_time.map(|ts| debug_duration(cur_ts.as_u64().saturating_sub(ts.as_u64())) ).unwrap_or("---".to_owned()),
self.stats().last_message_recv_time.map(|ts| debug_duration(cur_ts.as_u64().saturating_sub(ts.as_u64())) ).unwrap_or("---".to_owned()),
)
}
} }
// Resolves ready when the connection loop has terminated // Resolves ready when the connection loop has terminated

View File

@ -18,6 +18,36 @@ impl NetworkManager {
let this = self.clone(); let this = self.clone();
Box::pin( Box::pin(
async move { async move {
// First try to send data to the last socket we've seen this peer on
let data = if let Some(connection_descriptor) = destination_node_ref.last_connection() {
match this
.net()
.send_data_to_existing_connection(connection_descriptor, data)
.await?
{
None => {
// Update timestamp for this last connection since we just sent to it
destination_node_ref
.set_last_connection(connection_descriptor, get_aligned_timestamp());
return Ok(NetworkResult::value(SendDataKind::Existing(
connection_descriptor,
)));
}
Some(data) => {
// Couldn't send data to existing connection
// so pass the data back out
data
}
}
} else {
// No last connection
data
};
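The block above implements a "reuse before reconnect" policy: try the last connection this peer was seen on, and only when the send hands the buffer back does the code fall through to negotiating a new contact method. The control flow can be sketched with plain types (`send_to_existing` is a hypothetical stand-in for `send_data_to_existing_connection`):

```rust
/// Hypothetical stand-in: `None` means the data was sent; `Some(data)`
/// hands the buffer back so the caller can try another path without
/// re-allocating.
fn send_to_existing(alive: bool, data: Vec<u8>) -> Option<Vec<u8>> {
    if alive { None } else { Some(data) }
}

enum SendOutcome {
    Existing,
    NewConnection(Vec<u8>), // data still to be sent some other way
}

fn send(last_connection_alive: Option<bool>, data: Vec<u8>) -> SendOutcome {
    // First try the last connection we saw this peer on, if any.
    let data = if let Some(alive) = last_connection_alive {
        match send_to_existing(alive, data) {
            None => return SendOutcome::Existing,
            Some(data) => data, // couldn't send; pass the buffer back out
        }
    } else {
        data // no last connection
    };
    // No existing connection was found or usable; negotiate a new one.
    SendOutcome::NewConnection(data)
}

fn main() {
    assert!(matches!(send(Some(true), b"hi".to_vec()), SendOutcome::Existing));
    assert!(matches!(send(Some(false), b"hi".to_vec()), SendOutcome::NewConnection(_)));
    assert!(matches!(send(None, b"hi".to_vec()), SendOutcome::NewConnection(_)));
    println!("ok");
}
```

Threading the buffer through `Option<Vec<u8>>` rather than cloning it keeps the fast path allocation-free.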
// No existing connection was found or usable, so we proceed to see how to make a new one
// Get the best way to contact this node // Get the best way to contact this node
let contact_method = this.get_node_contact_method(destination_node_ref.clone())?; let contact_method = this.get_node_contact_method(destination_node_ref.clone())?;
@ -308,7 +338,7 @@ impl NetworkManager {
let routing_table = self.routing_table(); let routing_table = self.routing_table();
// If a node is punished, then don't try to contact it // If a node is punished, then don't try to contact it
if target_node_ref.node_ids().iter().find(|nid| self.address_filter().is_node_id_punished(**nid)).is_some() { if target_node_ref.node_ids().iter().any(|nid| self.address_filter().is_node_id_punished(*nid)) {
return Ok(NodeContactMethod::Unreachable); return Ok(NodeContactMethod::Unreachable);
} }
@ -345,10 +375,14 @@ impl NetworkManager {
} }
}; };
// Dial info filter comes from the target node ref // Dial info filter comes from the target node ref but must be filtered by this node's outbound capabilities
let dial_info_filter = target_node_ref.dial_info_filter(); let dial_info_filter = target_node_ref.dial_info_filter().filtered(
&DialInfoFilter::all()
.with_address_type_set(peer_a.signed_node_info().node_info().address_types())
.with_protocol_type_set(peer_a.signed_node_info().node_info().outbound_protocols()));
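The fix above narrows the target's dial info filter by this node's own outbound capabilities, so the node never selects dial info it cannot actually reach. With set types that is an intersection; a sketch using a bitmask stand-in (these names mirror, but are not, the crate's `EnumSet`-backed types):

```rust
// Bitmask stand-ins for a ProtocolTypeSet (UDP/TCP/WS/WSS).
const UDP: u8 = 0b0001;
const TCP: u8 = 0b0010;
const WS: u8 = 0b0100;
const WSS: u8 = 0b1000;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct DialInfoFilter {
    protocols: u8,
}

impl DialInfoFilter {
    fn all() -> Self {
        Self { protocols: UDP | TCP | WS | WSS }
    }
    fn with_protocols(self, protocols: u8) -> Self {
        Self { protocols }
    }
    /// Intersect two filters: only what both sides allow survives.
    fn filtered(self, other: &DialInfoFilter) -> Self {
        Self { protocols: self.protocols & other.protocols }
    }
}

fn main() {
    // Target advertises TCP+WS inbound; we can only do UDP+TCP outbound.
    let target = DialInfoFilter::all().with_protocols(TCP | WS);
    let ours = DialInfoFilter::all().with_protocols(UDP | TCP);
    assert_eq!(target.filtered(&ours).protocols, TCP);
    println!("ok");
}
```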
let sequencing = target_node_ref.sequencing(); let sequencing = target_node_ref.sequencing();
// If the node has had lost questions or failures to send, prefer sequencing // If the node has had lost questions or failures to send, prefer sequencing
// to improve reliability. The node may be experiencing UDP fragmentation drops // to improve reliability. The node may be experiencing UDP fragmentation drops
// or other firewalling issues and may perform better with TCP. // or other firewalling issues and may perform better with TCP.
@ -366,7 +400,7 @@ impl NetworkManager {
dial_info_failures_map.insert(did.dial_info, ts); dial_info_failures_map.insert(did.dial_info, ts);
} }
} }
let dif_sort: Option<Arc<dyn Fn(&DialInfoDetail, &DialInfoDetail) -> core::cmp::Ordering>> = if dial_info_failures_map.is_empty() { let dif_sort: Option<Arc<DialInfoDetailSort>> = if dial_info_failures_map.is_empty() {
None None
} else { } else {
Some(Arc::new(move |a: &DialInfoDetail, b: &DialInfoDetail| { Some(Arc::new(move |a: &DialInfoDetail, b: &DialInfoDetail| {
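The `DialInfoDetailSort` alias names the boxed comparator built here: `None` means natural order, `Some` carries a closure capturing the failure map so recently failed dial info sorts last. The same shape in miniature, with strings standing in for `DialInfoDetail`:

```rust
use std::cmp::Ordering;
use std::collections::HashMap;
use std::sync::Arc;

// Stand-in for the DialInfoDetailSort type alias.
type DetailSort = Arc<dyn Fn(&&str, &&str) -> Ordering>;

/// Build an optional comparator that sorts entries with recorded failures last.
fn failure_sort(failures: HashMap<String, u64>) -> Option<DetailSort> {
    if failures.is_empty() {
        None
    } else {
        Some(Arc::new(move |a: &&str, b: &&str| {
            let fa = failures.contains_key(*a);
            let fb = failures.contains_key(*b);
            fa.cmp(&fb) // false (no recorded failure) sorts before true
        }))
    }
}

fn main() {
    let mut failures = HashMap::new();
    failures.insert("udp:1".to_string(), 42u64);
    let sort = failure_sort(failures).expect("non-empty map yields a comparator");

    let mut details = vec!["udp:1", "tcp:1"];
    details.sort_by(|a, b| sort(a, b));
    assert_eq!(details, vec!["tcp:1", "udp:1"]);
    println!("ok");
}
```

Naming the `Arc<dyn Fn(...)>` behind a type alias is what the clippy `type_complexity` lint asks for, which appears to be the motivation for this hunk.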

View File

@ -73,7 +73,7 @@ impl NetworkManager {
inner.stats.clone() inner.stats.clone()
} }
pub fn get_veilid_state(&self) -> VeilidStateNetwork { pub fn get_veilid_state(&self) -> Box<VeilidStateNetwork> {
let has_state = self let has_state = self
.unlocked_inner .unlocked_inner
.components .components
@ -83,12 +83,12 @@ impl NetworkManager {
.unwrap_or(false); .unwrap_or(false);
if !has_state { if !has_state {
return VeilidStateNetwork { return Box::new(VeilidStateNetwork {
started: false, started: false,
bps_down: 0.into(), bps_down: 0.into(),
bps_up: 0.into(), bps_up: 0.into(),
peers: Vec::new(), peers: Vec::new(),
}; });
} }
let routing_table = self.routing_table(); let routing_table = self.routing_table();
@ -100,7 +100,7 @@ impl NetworkManager {
) )
}; };
VeilidStateNetwork { Box::new(VeilidStateNetwork {
started: true, started: true,
bps_down, bps_down,
bps_up, bps_up,
@ -119,7 +119,7 @@ impl NetworkManager {
} }
out out
}, },
} })
} }
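Returning `Box<VeilidStateNetwork>` instead of the struct by value keeps a large state object off the stack and shrinks any enum that embeds it, which is what clippy's large-variant lints flag. A quick illustration of the size effect with a deliberately large stand-in struct:

```rust
// Deliberately large struct, standing in for VeilidStateNetwork.
#[allow(dead_code)]
struct BigState {
    peers: [u64; 128],
}

#[allow(dead_code)]
enum UpdateByValue {
    Network(BigState),
    Shutdown,
}

#[allow(dead_code)]
enum UpdateBoxed {
    Network(Box<BigState>),
    Shutdown,
}

fn main() {
    // Every variant of the by-value enum pays for the largest one...
    assert!(std::mem::size_of::<UpdateByValue>() >= std::mem::size_of::<BigState>());
    // ...while boxing makes the variant pointer-sized.
    assert_eq!(
        std::mem::size_of::<UpdateBoxed>(),
        std::mem::size_of::<Box<BigState>>()
    );
    println!("ok");
}
```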
pub(super) fn send_network_update(&self) { pub(super) fn send_network_update(&self) {

View File

@ -11,7 +11,7 @@ impl NetworkManager {
) -> EyreResult<()> { ) -> EyreResult<()> {
// go through public_address_inconsistencies_table and time out things that have expired // go through public_address_inconsistencies_table and time out things that have expired
let mut inner = self.inner.lock(); let mut inner = self.inner.lock();
for (_, pait_v) in &mut inner.public_address_inconsistencies_table { for pait_v in inner.public_address_inconsistencies_table.values_mut() {
let mut expired = Vec::new(); let mut expired = Vec::new();
for (addr, exp_ts) in pait_v.iter() { for (addr, exp_ts) in pait_v.iter() {
if *exp_ts <= cur_ts { if *exp_ts <= cur_ts {
@ -79,7 +79,7 @@ impl NetworkManager {
// Get the ip(block) this report is coming from // Get the ip(block) this report is coming from
let reporting_ipblock = ip_to_ipblock( let reporting_ipblock = ip_to_ipblock(
ip6_prefix_size, ip6_prefix_size,
connection_descriptor.remote_address().to_ip_addr(), connection_descriptor.remote_address().ip_addr(),
); );
// Reject public address reports from nodes that we know are behind symmetric nat or // Reject public address reports from nodes that we know are behind symmetric nat or
@ -94,7 +94,7 @@ impl NetworkManager {
// If the socket address reported is the same as the reporter, then this is coming through a relay // If the socket address reported is the same as the reporter, then this is coming through a relay
// or it should be ignored due to local proximity (nodes on the same network block should not be trusted as // or it should be ignored due to local proximity (nodes on the same network block should not be trusted as
// public ip address reporters, only disinterested parties) // public ip address reporters, only disinterested parties)
if reporting_ipblock == ip_to_ipblock(ip6_prefix_size, socket_address.to_ip_addr()) { if reporting_ipblock == ip_to_ipblock(ip6_prefix_size, socket_address.ip_addr()) {
return; return;
} }
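`ip_to_ipblock` groups reporters so a whole IPv6 prefix counts as one "interested party" when judging public address reports; without it, one operator with a /56 could fake many independent witnesses. A sketch of such masking, assuming `ip6_prefix_size` is a bit count and IPv4 addresses are kept whole (this is an illustration, not the crate's implementation):

```rust
use std::net::{IpAddr, Ipv6Addr};

/// Mask an address down to its block: IPv4 stays as-is, IPv6 keeps only
/// the first `prefix` bits so one subnet maps to one key.
fn ip_to_ipblock(prefix: u32, addr: IpAddr) -> IpAddr {
    match addr {
        IpAddr::V4(v4) => IpAddr::V4(v4),
        IpAddr::V6(v6) => {
            let bits = u128::from(v6);
            let mask = if prefix == 0 {
                0
            } else {
                u128::MAX << (128 - prefix)
            };
            IpAddr::V6(Ipv6Addr::from(bits & mask))
        }
    }
}

fn main() {
    let a: IpAddr = "2001:db8::1".parse().unwrap();
    let b: IpAddr = "2001:db8::2".parse().unwrap();
    let c: IpAddr = "2001:db9::1".parse().unwrap();
    // Same /56: both land in the same block...
    assert_eq!(ip_to_ipblock(56, a), ip_to_ipblock(56, b));
    // ...a different subnet does not.
    assert_ne!(ip_to_ipblock(56, a), ip_to_ipblock(56, c));
    println!("ok");
}
```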
@ -167,7 +167,7 @@ impl NetworkManager {
for (reporting_ip_block, a) in pacc { for (reporting_ip_block, a) in pacc {
// If this address is not one of our current addresses (inconsistent) // If this address is not one of our current addresses (inconsistent)
// and we haven't already denylisted the reporting source, // and we haven't already denylisted the reporting source,
// Also check address with port zero in the even we are only checking changes to ip addresses // Also check address with port zero in the event we are only checking changes to ip addresses
if !current_addresses.contains(a) if !current_addresses.contains(a)
&& !current_addresses.contains(&a.with_port(0)) && !current_addresses.contains(&a.with_port(0))
&& !inner && !inner
@ -192,7 +192,7 @@ impl NetworkManager {
let pait = inner let pait = inner
.public_address_inconsistencies_table .public_address_inconsistencies_table
.entry(addr_proto_type_key) .entry(addr_proto_type_key)
.or_insert_with(|| HashMap::new()); .or_insert_with(HashMap::new);
for i in &inconsistencies { for i in &inconsistencies {
pait.insert(*i, exp_ts); pait.insert(*i, exp_ts);
} }
@ -204,7 +204,7 @@ impl NetworkManager {
let pait = inner let pait = inner
.public_address_inconsistencies_table .public_address_inconsistencies_table
.entry(addr_proto_type_key) .entry(addr_proto_type_key)
.or_insert_with(|| HashMap::new()); .or_insert_with(HashMap::new);
let exp_ts = get_aligned_timestamp() let exp_ts = get_aligned_timestamp()
+ PUBLIC_ADDRESS_INCONSISTENCY_PUNISHMENT_TIMEOUT_US; + PUBLIC_ADDRESS_INCONSISTENCY_PUNISHMENT_TIMEOUT_US;
for i in inconsistencies { for i in inconsistencies {

View File

@ -71,16 +71,16 @@ impl Address {
} }
} }
} }
pub fn to_ip_addr(&self) -> IpAddr { pub fn ip_addr(&self) -> IpAddr {
match self { match self {
Self::IPV4(a) => IpAddr::V4(*a), Self::IPV4(a) => IpAddr::V4(*a),
Self::IPV6(a) => IpAddr::V6(*a), Self::IPV6(a) => IpAddr::V6(*a),
} }
} }
pub fn to_socket_addr(&self, port: u16) -> SocketAddr { pub fn socket_addr(&self, port: u16) -> SocketAddr {
SocketAddr::new(self.to_ip_addr(), port) SocketAddr::new(self.ip_addr(), port)
} }
pub fn to_canonical(&self) -> Address { pub fn canonical(&self) -> Address {
match self { match self {
Address::IPV4(v4) => Address::IPV4(*v4), Address::IPV4(v4) => Address::IPV4(*v4),
Address::IPV6(v6) => match v6.to_ipv4() { Address::IPV6(v6) => match v6.to_ipv4() {
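The renames in this file (`to_ip_addr` → `ip_addr`, `to_canonical` → `canonical`, and so on) follow the Rust API convention that cheap `Copy`/borrowing getters drop the `to_` prefix. `canonical` additionally collapses IPv4-mapped IPv6 addresses into their IPv4 form; std can express that intent directly (here via `to_ipv4_mapped`, a stricter check than the `to_ipv4` shown in the diff):

```rust
use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

/// Collapse an IPv4-mapped IPv6 address (::ffff:a.b.c.d) into IPv4,
/// mirroring the intent of `Address::canonical`.
fn canonical(addr: IpAddr) -> IpAddr {
    match addr {
        IpAddr::V4(v4) => IpAddr::V4(v4),
        IpAddr::V6(v6) => match v6.to_ipv4_mapped() {
            Some(v4) => IpAddr::V4(v4),
            None => IpAddr::V6(v6),
        },
    }
}

fn main() {
    let mapped = IpAddr::V6("::ffff:192.0.2.1".parse::<Ipv6Addr>().unwrap());
    assert_eq!(canonical(mapped), IpAddr::V4(Ipv4Addr::new(192, 0, 2, 1)));

    // A genuine IPv6 address passes through unchanged.
    let plain = IpAddr::V6("2001:db8::1".parse::<Ipv6Addr>().unwrap());
    assert_eq!(canonical(plain), plain);
    println!("ok");
}
```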

View File

@ -1,7 +1,7 @@
#![allow(non_snake_case)] #![allow(non_snake_case)]
use super::*; use super::*;
#[allow(clippy::derive_hash_xor_eq)] #[allow(clippy::derived_hash_with_manual_eq)]
#[derive(Debug, PartialOrd, Ord, Hash, Serialize, Deserialize, EnumSetType)] #[derive(Debug, PartialOrd, Ord, Hash, Serialize, Deserialize, EnumSetType)]
#[enumset(repr = "u8")] #[enumset(repr = "u8")]
pub enum AddressType { pub enum AddressType {

View File

@ -36,10 +36,10 @@ impl fmt::Display for DialInfo {
let split_url = SplitUrl::from_str(&url).unwrap(); let split_url = SplitUrl::from_str(&url).unwrap();
match split_url.host { match split_url.host {
SplitUrlHost::Hostname(_) => { SplitUrlHost::Hostname(_) => {
write!(f, "ws|{}|{}", di.socket_address.to_ip_addr(), di.request) write!(f, "ws|{}|{}", di.socket_address.ip_addr(), di.request)
} }
SplitUrlHost::IpAddr(a) => { SplitUrlHost::IpAddr(a) => {
if di.socket_address.to_ip_addr() == a { if di.socket_address.ip_addr() == a {
write!(f, "ws|{}", di.request) write!(f, "ws|{}", di.request)
} else { } else {
panic!("resolved address does not match url: {}", di.request); panic!("resolved address does not match url: {}", di.request);
@ -52,7 +52,7 @@ impl fmt::Display for DialInfo {
let split_url = SplitUrl::from_str(&url).unwrap(); let split_url = SplitUrl::from_str(&url).unwrap();
match split_url.host { match split_url.host {
SplitUrlHost::Hostname(_) => { SplitUrlHost::Hostname(_) => {
write!(f, "wss|{}|{}", di.socket_address.to_ip_addr(), di.request) write!(f, "wss|{}|{}", di.socket_address.ip_addr(), di.request)
} }
SplitUrlHost::IpAddr(_) => { SplitUrlHost::IpAddr(_) => {
panic!( panic!(
@ -143,22 +143,22 @@ impl FromStr for DialInfo {
impl DialInfo { impl DialInfo {
pub fn udp_from_socketaddr(socket_addr: SocketAddr) -> Self { pub fn udp_from_socketaddr(socket_addr: SocketAddr) -> Self {
Self::UDP(DialInfoUDP { Self::UDP(DialInfoUDP {
socket_address: SocketAddress::from_socket_addr(socket_addr).to_canonical(), socket_address: SocketAddress::from_socket_addr(socket_addr).canonical(),
}) })
} }
pub fn tcp_from_socketaddr(socket_addr: SocketAddr) -> Self { pub fn tcp_from_socketaddr(socket_addr: SocketAddr) -> Self {
Self::TCP(DialInfoTCP { Self::TCP(DialInfoTCP {
socket_address: SocketAddress::from_socket_addr(socket_addr).to_canonical(), socket_address: SocketAddress::from_socket_addr(socket_addr).canonical(),
}) })
} }
pub fn udp(socket_address: SocketAddress) -> Self { pub fn udp(socket_address: SocketAddress) -> Self {
Self::UDP(DialInfoUDP { Self::UDP(DialInfoUDP {
socket_address: socket_address.to_canonical(), socket_address: socket_address.canonical(),
}) })
} }
pub fn tcp(socket_address: SocketAddress) -> Self { pub fn tcp(socket_address: SocketAddress) -> Self {
Self::TCP(DialInfoTCP { Self::TCP(DialInfoTCP {
socket_address: socket_address.to_canonical(), socket_address: socket_address.canonical(),
}) })
} }
pub fn try_ws(socket_address: SocketAddress, url: String) -> VeilidAPIResult<Self> { pub fn try_ws(socket_address: SocketAddress, url: String) -> VeilidAPIResult<Self> {
@ -173,7 +173,7 @@ impl DialInfo {
apibail_parse_error!("socket address port doesn't match url port", url); apibail_parse_error!("socket address port doesn't match url port", url);
} }
if let SplitUrlHost::IpAddr(a) = split_url.host { if let SplitUrlHost::IpAddr(a) = split_url.host {
if socket_address.to_ip_addr() != a { if socket_address.ip_addr() != a {
apibail_parse_error!( apibail_parse_error!(
format!("request address does not match socket address: {}", a), format!("request address does not match socket address: {}", a),
socket_address socket_address
@ -181,7 +181,7 @@ impl DialInfo {
} }
} }
Ok(Self::WS(DialInfoWS { Ok(Self::WS(DialInfoWS {
socket_address: socket_address.to_canonical(), socket_address: socket_address.canonical(),
request: url[5..].to_string(), request: url[5..].to_string(),
})) }))
} }
@ -203,7 +203,7 @@ impl DialInfo {
); );
} }
Ok(Self::WSS(DialInfoWSS { Ok(Self::WSS(DialInfoWSS {
socket_address: socket_address.to_canonical(), socket_address: socket_address.canonical(),
request: url[6..].to_string(), request: url[6..].to_string(),
})) }))
} }
@ -242,12 +242,12 @@ impl DialInfo {
Self::WSS(di) => di.socket_address, Self::WSS(di) => di.socket_address,
} }
} }
pub fn to_ip_addr(&self) -> IpAddr { pub fn ip_addr(&self) -> IpAddr {
match self { match self {
Self::UDP(di) => di.socket_address.to_ip_addr(), Self::UDP(di) => di.socket_address.ip_addr(),
Self::TCP(di) => di.socket_address.to_ip_addr(), Self::TCP(di) => di.socket_address.ip_addr(),
Self::WS(di) => di.socket_address.to_ip_addr(), Self::WS(di) => di.socket_address.ip_addr(),
Self::WSS(di) => di.socket_address.to_ip_addr(), Self::WSS(di) => di.socket_address.ip_addr(),
} }
} }
pub fn port(&self) -> u16 { pub fn port(&self) -> u16 {
@ -268,13 +268,13 @@ impl DialInfo {
} }
pub fn to_socket_addr(&self) -> SocketAddr { pub fn to_socket_addr(&self) -> SocketAddr {
match self { match self {
Self::UDP(di) => di.socket_address.to_socket_addr(), Self::UDP(di) => di.socket_address.socket_addr(),
Self::TCP(di) => di.socket_address.to_socket_addr(), Self::TCP(di) => di.socket_address.socket_addr(),
Self::WS(di) => di.socket_address.to_socket_addr(), Self::WS(di) => di.socket_address.socket_addr(),
Self::WSS(di) => di.socket_address.to_socket_addr(), Self::WSS(di) => di.socket_address.socket_addr(),
} }
} }
pub fn to_peer_address(&self) -> PeerAddress { pub fn peer_address(&self) -> PeerAddress {
match self { match self {
Self::UDP(di) => PeerAddress::new(di.socket_address, ProtocolType::UDP), Self::UDP(di) => PeerAddress::new(di.socket_address, ProtocolType::UDP),
Self::TCP(di) => PeerAddress::new(di.socket_address, ProtocolType::TCP), Self::TCP(di) => PeerAddress::new(di.socket_address, ProtocolType::TCP),
@ -376,11 +376,11 @@ impl DialInfo {
"udp" => Self::udp_from_socketaddr(sa), "udp" => Self::udp_from_socketaddr(sa),
"tcp" => Self::tcp_from_socketaddr(sa), "tcp" => Self::tcp_from_socketaddr(sa),
"ws" => Self::try_ws( "ws" => Self::try_ws(
SocketAddress::from_socket_addr(sa).to_canonical(), SocketAddress::from_socket_addr(sa).canonical(),
url.to_string(), url.to_string(),
)?, )?,
"wss" => Self::try_wss( "wss" => Self::try_wss(
SocketAddress::from_socket_addr(sa).to_canonical(), SocketAddress::from_socket_addr(sa).canonical(),
url.to_string(), url.to_string(),
)?, )?,
_ => { _ => {
@ -395,13 +395,13 @@ impl DialInfo {
match self { match self {
DialInfo::UDP(di) => ( DialInfo::UDP(di) => (
format!("U{}", di.socket_address.port()), format!("U{}", di.socket_address.port()),
intf::ptr_lookup(di.socket_address.to_ip_addr()) intf::ptr_lookup(di.socket_address.ip_addr())
.await .await
.unwrap_or_else(|_| di.socket_address.to_string()), .unwrap_or_else(|_| di.socket_address.to_string()),
), ),
DialInfo::TCP(di) => ( DialInfo::TCP(di) => (
format!("T{}", di.socket_address.port()), format!("T{}", di.socket_address.port()),
intf::ptr_lookup(di.socket_address.to_ip_addr()) intf::ptr_lookup(di.socket_address.ip_addr())
.await .await
.unwrap_or_else(|_| di.socket_address.to_string()), .unwrap_or_else(|_| di.socket_address.to_string()),
), ),
@ -447,11 +447,11 @@ impl DialInfo {
} }
pub async fn to_url(&self) -> String { pub async fn to_url(&self) -> String {
match self { match self {
DialInfo::UDP(di) => intf::ptr_lookup(di.socket_address.to_ip_addr()) DialInfo::UDP(di) => intf::ptr_lookup(di.socket_address.ip_addr())
.await .await
.map(|h| format!("udp://{}:{}", h, di.socket_address.port())) .map(|h| format!("udp://{}:{}", h, di.socket_address.port()))
.unwrap_or_else(|_| format!("udp://{}", di.socket_address)), .unwrap_or_else(|_| format!("udp://{}", di.socket_address)),
DialInfo::TCP(di) => intf::ptr_lookup(di.socket_address.to_ip_addr()) DialInfo::TCP(di) => intf::ptr_lookup(di.socket_address.ip_addr())
.await .await
.map(|h| format!("tcp://{}:{}", h, di.socket_address.port())) .map(|h| format!("tcp://{}:{}", h, di.socket_address.port()))
.unwrap_or_else(|_| format!("tcp://{}", di.socket_address)), .unwrap_or_else(|_| format!("tcp://{}", di.socket_address)),

View File

@ -4,7 +4,7 @@ use super::*;
// Keep member order appropriate for sorting < preference // Keep member order appropriate for sorting < preference
// Must match DialInfo order // Must match DialInfo order
#[allow(clippy::derive_hash_xor_eq)] #[allow(clippy::derived_hash_with_manual_eq)]
#[derive(Debug, PartialOrd, Ord, Hash, EnumSetType, Serialize, Deserialize)] #[derive(Debug, PartialOrd, Ord, Hash, EnumSetType, Serialize, Deserialize)]
#[enumset(repr = "u8")] #[enumset(repr = "u8")]
pub enum LowLevelProtocolType { pub enum LowLevelProtocolType {

View File

@ -10,7 +10,7 @@ pub struct PeerAddress {
impl PeerAddress { impl PeerAddress {
pub fn new(socket_address: SocketAddress, protocol_type: ProtocolType) -> Self { pub fn new(socket_address: SocketAddress, protocol_type: ProtocolType) -> Self {
Self { Self {
socket_address: socket_address.to_canonical(), socket_address: socket_address.canonical(),
protocol_type, protocol_type,
} }
} }
@ -23,8 +23,8 @@ impl PeerAddress {
self.protocol_type self.protocol_type
} }
pub fn to_socket_addr(&self) -> SocketAddr { pub fn socket_addr(&self) -> SocketAddr {
self.socket_address.to_socket_addr() self.socket_address.socket_addr()
} }
pub fn address_type(&self) -> AddressType { pub fn address_type(&self) -> AddressType {
@ -42,7 +42,10 @@ impl FromStr for PeerAddress {
type Err = VeilidAPIError; type Err = VeilidAPIError;
fn from_str(s: &str) -> VeilidAPIResult<PeerAddress> { fn from_str(s: &str) -> VeilidAPIResult<PeerAddress> {
let Some((first, second)) = s.split_once(':') else { let Some((first, second)) = s.split_once(':') else {
return Err(VeilidAPIError::parse_error("PeerAddress is missing a colon: {}", s)); return Err(VeilidAPIError::parse_error(
"PeerAddress is missing a colon: {}",
s,
));
}; };
let protocol_type = ProtocolType::from_str(first)?; let protocol_type = ProtocolType::from_str(first)?;
let socket_address = SocketAddress::from_str(second)?; let socket_address = SocketAddress::from_str(second)?;

View File

@ -3,7 +3,7 @@ use super::*;
// Keep member order appropriate for sorting < preference // Keep member order appropriate for sorting < preference
// Must match DialInfo order // Must match DialInfo order
#[allow(clippy::derive_hash_xor_eq)] #[allow(clippy::derived_hash_with_manual_eq)]
#[derive(Debug, PartialOrd, Ord, Hash, EnumSetType, Serialize, Deserialize)] #[derive(Debug, PartialOrd, Ord, Hash, EnumSetType, Serialize, Deserialize)]
#[enumset(repr = "u8")] #[enumset(repr = "u8")]
pub enum ProtocolType { pub enum ProtocolType {

View File

@ -34,27 +34,27 @@ impl SocketAddress {
self.port = port self.port = port
} }
pub fn with_port(&self, port: u16) -> Self { pub fn with_port(&self, port: u16) -> Self {
let mut sa = self.clone(); let mut sa = *self;
sa.port = port; sa.port = port;
sa sa
} }
pub fn to_canonical(&self) -> SocketAddress { pub fn canonical(&self) -> SocketAddress {
SocketAddress { SocketAddress {
address: self.address.to_canonical(), address: self.address.canonical(),
port: self.port, port: self.port,
} }
} }
pub fn to_ip_addr(&self) -> IpAddr { pub fn ip_addr(&self) -> IpAddr {
self.address.to_ip_addr() self.address.ip_addr()
} }
pub fn to_socket_addr(&self) -> SocketAddr { pub fn socket_addr(&self) -> SocketAddr {
self.address.to_socket_addr(self.port) self.address.socket_addr(self.port)
} }
} }
impl fmt::Display for SocketAddress { impl fmt::Display for SocketAddress {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
write!(f, "{}", self.to_socket_addr()) write!(f, "{}", self.socket_addr())
} }
} }

View File

@ -32,18 +32,18 @@ pub const PUBLIC_INTERNET_CAPABILITIES: [Capability; PUBLIC_INTERNET_CAPABILITIE
CAP_BLOCKSTORE, CAP_BLOCKSTORE,
]; ];
#[cfg(feature = "unstable-blockstore")] // #[cfg(feature = "unstable-blockstore")]
const LOCAL_NETWORK_CAPABILITIES_LEN: usize = 3; // const LOCAL_NETWORK_CAPABILITIES_LEN: usize = 3;
#[cfg(not(feature = "unstable-blockstore"))] // #[cfg(not(feature = "unstable-blockstore"))]
const LOCAL_NETWORK_CAPABILITIES_LEN: usize = 2; // const LOCAL_NETWORK_CAPABILITIES_LEN: usize = 2;
pub const LOCAL_NETWORK_CAPABILITIES: [Capability; LOCAL_NETWORK_CAPABILITIES_LEN] = [ // pub const LOCAL_NETWORK_CAPABILITIES: [Capability; LOCAL_NETWORK_CAPABILITIES_LEN] = [
//CAP_RELAY, // //CAP_RELAY,
CAP_DHT, // CAP_DHT,
CAP_APPMESSAGE, // CAP_APPMESSAGE,
#[cfg(feature = "unstable-blockstore")] // #[cfg(feature = "unstable-blockstore")]
CAP_BLOCKSTORE, // CAP_BLOCKSTORE,
]; // ];
pub const MAX_CAPABILITIES: usize = 64; pub const MAX_CAPABILITIES: usize = 64;
@ -149,7 +149,7 @@ impl Network {
if self if self
.network_manager() .network_manager()
.address_filter() .address_filter()
.is_ip_addr_punished(dial_info.address().to_ip_addr()) .is_ip_addr_punished(dial_info.address().ip_addr())
{ {
return Ok(NetworkResult::no_connection_other("punished")); return Ok(NetworkResult::no_connection_other("punished"));
} }
@ -173,7 +173,7 @@ impl Network {
// Network accounting // Network accounting
self.network_manager() self.network_manager()
.stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64)); .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));
Ok(NetworkResult::Value(())) Ok(NetworkResult::Value(()))
}) })
@ -202,7 +202,7 @@ impl Network {
if self if self
.network_manager() .network_manager()
.address_filter() .address_filter()
.is_ip_addr_punished(dial_info.address().to_ip_addr()) .is_ip_addr_punished(dial_info.address().ip_addr())
{ {
return Ok(NetworkResult::no_connection_other("punished")); return Ok(NetworkResult::no_connection_other("punished"));
} }
@ -227,7 +227,7 @@ impl Network {
network_result_try!(pnc.send(data).await.wrap_err("send failure")?); network_result_try!(pnc.send(data).await.wrap_err("send failure")?);
self.network_manager() self.network_manager()
.stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64)); .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));
let out = let out =
network_result_try!(network_result_try!(timeout(timeout_ms, pnc.recv()) network_result_try!(network_result_try!(timeout(timeout_ms, pnc.recv())
@ -235,10 +235,8 @@ impl Network {
.into_network_result()) .into_network_result())
.wrap_err("recv failure")?); .wrap_err("recv failure")?);
self.network_manager().stats_packet_rcvd( self.network_manager()
dial_info.to_ip_addr(), .stats_packet_rcvd(dial_info.ip_addr(), ByteCount::new(out.len() as u64));
ByteCount::new(out.len() as u64),
);
Ok(NetworkResult::Value(out)) Ok(NetworkResult::Value(out))
} }
@ -273,7 +271,7 @@ impl Network {
ConnectionHandleSendResult::Sent => { ConnectionHandleSendResult::Sent => {
// Network accounting // Network accounting
self.network_manager().stats_packet_sent( self.network_manager().stats_packet_sent(
descriptor.remote().to_socket_addr().ip(), descriptor.remote().socket_addr().ip(),
ByteCount::new(data_len as u64), ByteCount::new(data_len as u64),
); );
@ -324,7 +322,7 @@ impl Network {
// Network accounting // Network accounting
self.network_manager() self.network_manager()
.stats_packet_sent(dial_info.to_ip_addr(), ByteCount::new(data_len as u64)); .stats_packet_sent(dial_info.ip_addr(), ByteCount::new(data_len as u64));
Ok(NetworkResult::value(connection_descriptor)) Ok(NetworkResult::value(connection_descriptor))
}) })
@ -351,14 +349,24 @@ impl Network {
let family_global = AddressTypeSet::from(AddressType::IPV4); let family_global = AddressTypeSet::from(AddressType::IPV4);
let family_local = AddressTypeSet::from(AddressType::IPV4); let family_local = AddressTypeSet::from(AddressType::IPV4);
let public_internet_capabilities = {
PUBLIC_INTERNET_CAPABILITIES
.iter()
.copied()
.filter(|cap| !c.capabilities.disable.contains(cap))
.collect::<Vec<Capability>>()
};
ProtocolConfig { ProtocolConfig {
outbound, outbound,
inbound, inbound,
family_global, family_global,
family_local, family_local,
local_network_capabilities: vec![],
public_internet_capabilities,
} }
}; };
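The protocol config now carries the capability lists, computed once by filtering the compile-time capability table against the disable list from node config. The filter itself is a plain iterator chain over a const array; a sketch with illustrative capability names (the real table is `PUBLIC_INTERNET_CAPABILITIES`):

```rust
// Illustrative capability table; names here are made up for the example.
const CAPABILITIES: [&str; 4] = ["ROUT", "DHTV", "APPM", "RLAY"];

/// Keep every capability not explicitly disabled in the node config.
fn enabled_capabilities(disabled: &[&str]) -> Vec<&'static str> {
    CAPABILITIES
        .iter()
        .copied()
        .filter(|cap| !disabled.contains(cap))
        .collect()
}

fn main() {
    assert_eq!(
        enabled_capabilities(&[]),
        vec!["ROUT", "DHTV", "APPM", "RLAY"]
    );
    assert_eq!(enabled_capabilities(&["RLAY"]), vec!["ROUT", "DHTV", "APPM"]);
    println!("ok");
}
```

Computing the list once and storing it on `ProtocolConfig` (rather than re-filtering at each use site, as the removed hunk below did) keeps config reads out of the hot path.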
self.inner.lock().protocol_config = protocol_config; self.inner.lock().protocol_config = protocol_config.clone();
// Start editing routing table // Start editing routing table
let mut editor_public_internet = self let mut editor_public_internet = self
@ -369,20 +377,11 @@ impl Network {
// set up the routing table's network config // set up the routing table's network config
// if we have static public dialinfo, upgrade our network class // if we have static public dialinfo, upgrade our network class
let public_internet_capabilities = {
let c = self.config.get();
PUBLIC_INTERNET_CAPABILITIES
.iter()
.copied()
.filter(|cap| !c.capabilities.disable.contains(cap))
.collect::<Vec<Capability>>()
};
editor_public_internet.setup_network( editor_public_internet.setup_network(
protocol_config.outbound, protocol_config.outbound,
protocol_config.inbound, protocol_config.inbound,
protocol_config.family_global, protocol_config.family_global,
public_internet_capabilities, protocol_config.public_internet_capabilities.clone(),
); );
editor_public_internet.set_network_class(Some(NetworkClass::WebApp)); editor_public_internet.set_network_class(Some(NetworkClass::WebApp));
@ -434,11 +433,11 @@ impl Network {
Vec::new() Vec::new()
} }
pub fn get_local_port(&self, protocol_type: ProtocolType) -> Option<u16> { pub fn get_local_port(&self, _protocol_type: ProtocolType) -> Option<u16> {
None None
} }
pub fn get_preferred_local_address(&self, dial_info: &DialInfo) -> Option<SocketAddr> { pub fn get_preferred_local_address(&self, _dial_info: &DialInfo) -> Option<SocketAddr> {
None None
} }

View File

@@ -19,7 +19,7 @@ impl ProtocolNetworkConnection {
         timeout_ms: u32,
         address_filter: AddressFilter,
     ) -> io::Result<NetworkResult<ProtocolNetworkConnection>> {
-        if address_filter.is_ip_addr_punished(dial_info.address().to_ip_addr()) {
+        if address_filter.is_ip_addr_punished(dial_info.address().ip_addr()) {
             return Ok(NetworkResult::no_connection_other("punished"));
         }
         match dial_info.protocol_type() {
View File
@@ -56,7 +56,7 @@ impl WebsocketNetworkConnection {
     }

     pub fn descriptor(&self) -> ConnectionDescriptor {
-        self.descriptor.clone()
+        self.descriptor
     }

     // #[instrument(level = "trace", err, skip(self))]
@@ -144,7 +144,7 @@ impl WebsocketProtocolHandler {
     // Make our connection descriptor
     let wnc = WebsocketNetworkConnection::new(
-        ConnectionDescriptor::new_no_local(dial_info.to_peer_address()),
+        ConnectionDescriptor::new_no_local(dial_info.peer_address()),
         wsmeta,
         wsio,
     );
View File
@@ -261,9 +261,8 @@ impl ReceiptManager {
         // Wait on all the multi-call callbacks
         loop {
-            match callbacks.next().timeout_at(stop_token.clone()).await {
-                Ok(Some(_)) => {}
-                Ok(None) | Err(_) => break,
+            if let Ok(None) | Err(_) = callbacks.next().timeout_at(stop_token.clone()).await {
+                break;
             }
         }
     }
@@ -307,7 +306,7 @@ impl ReceiptManager {
         // Wait for everything to stop
         debug!("waiting for timeout task to stop");
-        if !timeout_task.join().await.is_ok() {
+        if timeout_task.join().await.is_err() {
             panic!("joining timeout task failed");
         }
@@ -333,7 +332,7 @@ impl ReceiptManager {
         let mut inner = self.inner.lock();
         inner.records_by_nonce.insert(receipt_nonce, record);
-        Self::update_next_oldest_timestamp(&mut *inner);
+        Self::update_next_oldest_timestamp(&mut inner);
     }

     pub fn record_single_shot_receipt(
@@ -351,7 +350,7 @@ impl ReceiptManager {
         let mut inner = self.inner.lock();
         inner.records_by_nonce.insert(receipt_nonce, record);
-        Self::update_next_oldest_timestamp(&mut *inner);
+        Self::update_next_oldest_timestamp(&mut inner);
     }

     fn update_next_oldest_timestamp(inner: &mut ReceiptManagerInner) {
@@ -382,7 +381,7 @@ impl ReceiptManager {
                 bail!("receipt not recorded");
             }
         };
-        Self::update_next_oldest_timestamp(&mut *inner);
+        Self::update_next_oldest_timestamp(&mut inner);
         record
     };
@@ -448,14 +447,12 @@ impl ReceiptManager {
         let receipt_event = match receipt_returned {
             ReceiptReturned::OutOfBand => ReceiptEvent::ReturnedOutOfBand,
             ReceiptReturned::Safety => ReceiptEvent::ReturnedSafety,
-            ReceiptReturned::InBand {
-                ref inbound_noderef,
-            } => ReceiptEvent::ReturnedInBand {
-                inbound_noderef: inbound_noderef.clone(),
-            },
-            ReceiptReturned::Private { ref private_route } => ReceiptEvent::ReturnedPrivate {
-                private_route: private_route.clone(),
-            },
+            ReceiptReturned::InBand { inbound_noderef } => {
+                ReceiptEvent::ReturnedInBand { inbound_noderef }
+            }
+            ReceiptReturned::Private { private_route } => {
+                ReceiptEvent::ReturnedPrivate { private_route }
+            }
         };
         let callback_future = Self::perform_callback(receipt_event, &mut record_mut);
@@ -464,7 +461,7 @@ impl ReceiptManager {
         if record_mut.returns_so_far == record_mut.expected_returns {
             inner.records_by_nonce.remove(&receipt_nonce);
-            Self::update_next_oldest_timestamp(&mut *inner);
+            Self::update_next_oldest_timestamp(&mut inner);
         }
         (callback_future, stop_token)
     };
View File
@@ -75,8 +75,8 @@ impl Bucket {
         });
     }
     let bucket_data = SerializedBucketData { entries };
-    let out = serialize_json_bytes(&bucket_data);
-    out
+    serialize_json_bytes(bucket_data)
 }

 /// Create a new entry with a node_id of this crypto kind and return it
@@ -129,11 +129,8 @@ impl Bucket {
     let mut extra_entries = bucket_len - bucket_depth;

     // Get the sorted list of entries by their kick order
-    let mut sorted_entries: Vec<(PublicKey, Arc<BucketEntry>)> = self
-        .entries
-        .iter()
-        .map(|(k, v)| (k.clone(), v.clone()))
-        .collect();
+    let mut sorted_entries: Vec<(PublicKey, Arc<BucketEntry>)> =
+        self.entries.iter().map(|(k, v)| (*k, v.clone())).collect();
     let cur_ts = get_aligned_timestamp();
     sorted_entries.sort_by(|a, b| -> core::cmp::Ordering {
         if a.0 == b.0 {
View File
@@ -223,7 +223,7 @@ impl BucketEntryInner {
     // Lower timestamp to the front, recent or no timestamp is at the end
     if let Some(e1_ts) = &e1.peer_stats.rpc_stats.first_consecutive_seen_ts {
         if let Some(e2_ts) = &e2.peer_stats.rpc_stats.first_consecutive_seen_ts {
-            e1_ts.cmp(&e2_ts)
+            e1_ts.cmp(e2_ts)
         } else {
             std::cmp::Ordering::Less
         }
@@ -298,7 +298,7 @@ impl BucketEntryInner {
     // If we're updating an entry's node info, purge all
     // but the last connection in our last connections list
     // because the dial info could have changed and it's safer to just reconnect.
-    // The latest connection would have been the once we got the new node info
+    // The latest connection would have been the one we got the new node info
     // over so that connection is still valid.
     if node_info_changed {
         self.clear_last_connections_except_latest();
@@ -437,7 +437,7 @@ impl BucketEntryInner {
     // Clears the table of last connections except the most recent one
     pub fn clear_last_connections_except_latest(&mut self) {
-        if self.last_connections.len() == 0 {
+        if self.last_connections.is_empty() {
             // No last_connections
             return;
         }
@@ -454,7 +454,7 @@ impl BucketEntryInner {
     let Some(most_recent_connection) = most_recent_connection else {
         return;
     };
-    for (k, _) in &self.last_connections {
+    for k in self.last_connections.keys() {
         if k != most_recent_connection {
             dead_keys.push(k.clone());
         }
@@ -492,7 +492,7 @@ impl BucketEntryInner {
     }

     if !only_live {
-        return Some(v.clone());
+        return Some(*v);
     }

     // Check if the connection is still considered live
@@ -509,7 +509,7 @@ impl BucketEntryInner {
     };

     if alive {
-        Some(v.clone())
+        Some(*v)
     } else {
         None
     }
@@ -583,13 +583,11 @@ impl BucketEntryInner {
         RoutingDomain::LocalNetwork => self
             .local_network
             .node_status
-            .as_ref()
-            .map(|ns| ns.clone()),
+            .as_ref().cloned(),
         RoutingDomain::PublicInternet => self
             .public_internet
             .node_status
-            .as_ref()
-            .map(|ns| ns.clone()),
+            .as_ref().cloned()
     }
 }
@@ -649,9 +647,10 @@ impl BucketEntryInner {
         return false;
     }
-    // if we have seen the node consistently for longer that UNRELIABLE_PING_SPAN_SECS
     match self.peer_stats.rpc_stats.first_consecutive_seen_ts {
+        // If we have not seen a node consecutively, it can't be reliable
         None => false,
+        // If we have seen the node consistently for longer than UNRELIABLE_PING_SPAN_SECS then it is reliable
        Some(ts) => {
             cur_ts.saturating_sub(ts) >= TimestampDuration::new(UNRELIABLE_PING_SPAN_SECS as u64 * 1000000u64)
         }
@@ -662,11 +661,13 @@ impl BucketEntryInner {
     if self.peer_stats.rpc_stats.failed_to_send >= NEVER_REACHED_PING_COUNT {
         return true;
     }
-    // if we have not heard from the node at all for the duration of the unreliable ping span
-    // a node is not dead if we haven't heard from it yet,
-    // but we give it NEVER_REACHED_PING_COUNT chances to ping before we say it's dead
     match self.peer_stats.rpc_stats.last_seen_ts {
-        None => self.peer_stats.rpc_stats.recent_lost_answers < NEVER_REACHED_PING_COUNT,
+        // a node is not dead if we haven't heard from it yet,
+        // but we give it NEVER_REACHED_PING_COUNT chances to ping before we say it's dead
+        None => self.peer_stats.rpc_stats.recent_lost_answers >= NEVER_REACHED_PING_COUNT,
+        // return dead if we have not heard from the node at all for the duration of the unreliable ping span
        Some(ts) => {
             cur_ts.saturating_sub(ts) >= TimestampDuration::new(UNRELIABLE_PING_SPAN_SECS as u64 * 1000000u64)
         }
@@ -889,7 +890,7 @@ impl BucketEntry {
         F: FnOnce(&RoutingTableInner, &BucketEntryInner) -> R,
     {
         let inner = self.inner.read();
-        f(rti, &*inner)
+        f(rti, &inner)
     }

     // Note, that this requires -also- holding the RoutingTable write lock, as a
@@ -899,7 +900,7 @@ impl BucketEntry {
         F: FnOnce(&mut RoutingTableInner, &mut BucketEntryInner) -> R,
     {
         let mut inner = self.inner.write();
-        f(rti, &mut *inner)
+        f(rti, &mut inner)
     }

     // Internal inner access for RoutingTableInner only
@@ -908,7 +909,7 @@ impl BucketEntry {
         F: FnOnce(&BucketEntryInner) -> R,
     {
         let inner = self.inner.read();
-        f(&*inner)
+        f(&inner)
     }

     // Internal inner access for RoutingTableInner only
@@ -917,7 +918,7 @@ impl BucketEntry {
         F: FnOnce(&mut BucketEntryInner) -> R,
     {
         let mut inner = self.inner.write();
-        f(&mut *inner)
+        f(&mut inner)
     }
 }
View File
@@ -73,6 +73,7 @@ impl RoutingTable {
         " Self Transfer Stats: {:#?}\n\n",
         inner.self_transfer_stats
     );
+    out += &format!(" Version: {}\n\n", veilid_version_string());
     out
 }
@@ -111,7 +112,7 @@ impl RoutingTable {
     let mut out = String::new();
     out += &format!("Entries: {}\n", inner.bucket_entry_count());
-    out += &format!(" Live:\n");
+    out += " Live:\n";
     for ec in inner.cached_entry_counts() {
         let routing_domain = ec.0 .0;
         let crypto_kind = ec.0 .1;
View File
@@ -71,12 +71,14 @@ pub type SerializedBucketMap = BTreeMap<CryptoKind, SerializedBuckets>;
 #[derive(Clone, Debug, Default, Eq, PartialEq)]
 pub struct RoutingTableHealth {
-    /// Number of reliable (responsive) entries in the routing table
+    /// Number of reliable (long-term responsive) entries in the routing table
     pub reliable_entry_count: usize,
     /// Number of unreliable (occasionally unresponsive) entries in the routing table
     pub unreliable_entry_count: usize,
     /// Number of dead (always unresponsive) entries in the routing table
     pub dead_entry_count: usize,
+    /// Number of live (responsive) entries in the routing table per RoutingDomain and CryptoKind
+    pub live_entry_counts: BTreeMap<(RoutingDomain, CryptoKind), usize>,
     /// If PublicInternet network class is valid yet
     pub public_internet_ready: bool,
     /// If LocalNetwork network class is valid yet
@@ -129,7 +131,7 @@ impl RoutingTableUnlockedInner {
     where
         F: FnOnce(&VeilidConfigInner) -> R,
     {
-        f(&*self.config.get())
+        f(&self.config.get())
     }

     pub fn node_id(&self, kind: CryptoKind) -> TypedKey {
@@ -388,11 +390,15 @@ impl RoutingTable {
     }

     // Caches valid, load saved routing table
-    let Some(serialized_bucket_map): Option<SerializedBucketMap> = db.load_json(0, SERIALIZED_BUCKET_MAP).await? else {
+    let Some(serialized_bucket_map): Option<SerializedBucketMap> =
+        db.load_json(0, SERIALIZED_BUCKET_MAP).await?
+    else {
         log_rtab!(debug "no bucket map in saved routing table");
         return Ok(());
     };
-    let Some(all_entry_bytes): Option<SerializedBuckets> = db.load_json(0, ALL_ENTRY_BYTES).await? else {
+    let Some(all_entry_bytes): Option<SerializedBuckets> =
+        db.load_json(0, ALL_ENTRY_BYTES).await?
+    else {
         log_rtab!(debug "no all_entry_bytes in saved routing table");
         return Ok(());
     };
@@ -537,7 +543,7 @@ impl RoutingTable {
     peer_b: &PeerInfo,
     dial_info_filter: DialInfoFilter,
     sequencing: Sequencing,
-    dif_sort: Option<Arc<dyn Fn(&DialInfoDetail, &DialInfoDetail) -> core::cmp::Ordering>>,
+    dif_sort: Option<Arc<DialInfoDetailSort>>,
 ) -> ContactMethod {
     self.inner.read().get_contact_method(
         routing_domain,
@@ -881,7 +887,7 @@ impl RoutingTable {
     crypto_kind: CryptoKind,
     max_per_type: usize,
 ) -> Vec<NodeRef> {
-    let protocol_types = vec![
+    let protocol_types = [
         ProtocolType::UDP,
         ProtocolType::TCP,
         ProtocolType::WS,
@@ -889,8 +895,8 @@ impl RoutingTable {
     ];
     let protocol_types_len = protocol_types.len();
-    let mut nodes_proto_v4 = vec![0usize, 0usize, 0usize, 0usize];
-    let mut nodes_proto_v6 = vec![0usize, 0usize, 0usize, 0usize];
+    let mut nodes_proto_v4 = [0usize, 0usize, 0usize, 0usize];
+    let mut nodes_proto_v6 = [0usize, 0usize, 0usize, 0usize];

     let filter = Box::new(
         move |rti: &RoutingTableInner, entry: Option<Arc<BucketEntry>>| {
View File
@@ -85,7 +85,7 @@ pub trait NodeRefBase: Sized {
     self.common()
         .filter
         .as_ref()
-        .map(|f| f.dial_info_filter.clone())
+        .map(|f| f.dial_info_filter)
         .unwrap_or(DialInfoFilter::all())
 }
@@ -244,7 +244,7 @@ pub trait NodeRefBase: Sized {
     })
 }

-fn all_filtered_dial_info_details<F>(&self) -> Vec<DialInfoDetail> {
+fn all_filtered_dial_info_details(&self) -> Vec<DialInfoDetail> {
     let routing_domain_set = self.routing_domain_set();
     let dial_info_filter = self.dial_info_filter();
@@ -283,7 +283,7 @@ pub trait NodeRefBase: Sized {
     self.operate(|rti, e| {
         // apply sequencing to filter and get sort
         let sequencing = self.common().sequencing;
-        let filter = self.common().filter.clone().unwrap_or_default();
+        let filter = self.common().filter.unwrap_or_default();
         let (ordered, filter) = filter.with_sequencing(sequencing);
         let mut last_connections = e.last_connections(rti, true, filter);
@@ -444,7 +444,7 @@ impl Clone for NodeRef {
     common: NodeRefBaseCommon {
         routing_table: self.common.routing_table.clone(),
         entry: self.common.entry.clone(),
-        filter: self.common.filter.clone(),
+        filter: self.common.filter,
         sequencing: self.common.sequencing,
         #[cfg(feature = "tracking")]
         track_id: self.common.entry.write().track(),
View File
@@ -18,7 +18,7 @@ pub enum RouteNode {
     /// Route node is optimized, no contact method information as this node id has been seen before
     NodeId(PublicKey),
     /// Route node with full contact method information to ensure the peer is reachable
-    PeerInfo(PeerInfo),
+    PeerInfo(Box<PeerInfo>),
 }

 impl RouteNode {
@@ -41,7 +41,7 @@ impl RouteNode {
     Ok(nr) => nr,
     Err(e) => {
         log_rtab!(debug "failed to look up route node: {}", e);
-        return None;
+        None
     }
 }
@@ -49,13 +49,13 @@ impl RouteNode {
     //
     match routing_table.register_node_with_peer_info(
         RoutingDomain::PublicInternet,
-        pi.clone(),
+        *pi.clone(),
         false,
     ) {
         Ok(nr) => Some(nr),
         Err(e) => {
             log_rtab!(debug "failed to register route node: {}", e);
-            return None;
+            None
         }
     }
 }
@@ -95,7 +95,7 @@ impl RouteHop {
 #[derive(Clone, Debug)]
 pub enum PrivateRouteHops {
     /// The first hop of a private route, unencrypted, route_hops == total hop count
-    FirstHop(RouteHop),
+    FirstHop(Box<RouteHop>),
     /// Private route internal node. Has > 0 private route hops left but < total hop count
     Data(RouteHopData),
     /// Private route has ended (hop count = 0)
@@ -134,10 +134,10 @@ impl PrivateRoute {
     Self {
         public_key,
         hop_count: 1,
-        hops: PrivateRouteHops::FirstHop(RouteHop {
+        hops: PrivateRouteHops::FirstHop(Box::new(RouteHop {
             node,
             next_hop: None,
-        }),
+        })),
     }
 }
@@ -177,10 +177,10 @@ impl PrivateRoute {
             None => PrivateRouteHops::Empty,
         };

-        return Some(first_hop_node);
+        Some(first_hop_node)
     }
-    PrivateRouteHops::Data(_) => return None,
-    PrivateRouteHops::Empty => return None,
+    PrivateRouteHops::Data(_) => None,
+    PrivateRouteHops::Empty => None,
 }
File diff suppressed because it is too large

View File
@@ -13,7 +13,7 @@ fn _get_route_permutation_count(hop_count: usize) -> usize {
     // more than two nodes has factorial permutation
     // hop_count = 3 -> 2! -> 2
     // hop_count = 4 -> 3! -> 6
-    (3..hop_count).into_iter().fold(2usize, |acc, x| acc * x)
+    (3..hop_count).fold(2usize, |acc, x| acc * x)
 }

 pub type PermReturnType = (Vec<usize>, bool);
 pub type PermFunc<'t> = Box<dyn FnMut(&[usize]) -> Option<PermReturnType> + Send + 't>;
@@ -47,7 +47,7 @@ pub fn with_route_permutations(
     f: &mut PermFunc,
 ) -> Option<PermReturnType> {
     if size == 1 {
-        return f(&permutation);
+        return f(permutation);
     }

     for i in 0..size {
View File
@@ -112,7 +112,7 @@ impl RouteSetSpecDetail {
     }

     pub fn contains_nodes(&self, nodes: &[TypedKey]) -> bool {
         for tk in nodes {
-            for (_pk, rsd) in &self.route_set {
+            for rsd in self.route_set.values() {
                 if rsd.crypto_kind == tk.kind && rsd.hops.contains(&tk.value) {
                     return true;
                 }
Some files were not shown because too many files have changed in this diff