About salsa
Salsa is a Rust framework for writing incremental, on-demand programs -- these are programs that want to adapt to changes in their inputs, continuously producing a new output that is up-to-date. Salsa is based on the incremental recompilation techniques that we built for rustc, and many (but not all) of its users are building compilers or other similar tooling.
If you'd like to learn more about Salsa, check out:
- The overview, for a brief summary.
- The tutorial, for a detailed look.
- You can also watch some of our videos, though the content there is rather out of date.
If you'd like to chat about Salsa, or you think you might like to contribute, please jump on to our Zulip instance at salsa.zulipchat.com.
Salsa overview
⚠️ IN-PROGRESS VERSION OF SALSA. ⚠️
This page describes the unreleased "Salsa 2022" version, which is a major departure from older versions of salsa. The code here works but is only available on GitHub and from the `salsa-2022` crate.
This page contains a brief overview of the pieces of a salsa program. For a more detailed look, check out the tutorial, which walks through the creation of an entire project end-to-end.
Goal of Salsa
The goal of salsa is to support efficient incremental recomputation. salsa is used in rust-analyzer, for example, to help it recompile your program quickly as you type.
The basic idea of a salsa program is like this:
```rust
let mut input = ...;
loop {
    let output = your_program(&input);
    modify(&mut input);
}
```
You start out with an input that has some value. You invoke your program to get back a result. Some time later, you modify the input and invoke your program again. Our goal is to make this second call faster by re-using some of the results from the first call.
In reality, of course, you can have many inputs and "your program" may be many different methods and functions defined on those inputs. But this picture still conveys a few important concepts:
- Salsa separates out the "incremental computation" (the function `your_program`) from some outer loop that is defining the inputs.
- Salsa gives you the tools to define `your_program`.
- Salsa assumes that `your_program` is a purely deterministic function of its inputs, or else this whole setup makes no sense.
- The mutation of inputs always happens outside of `your_program`, as part of this master loop.
Database
Each time you run your program, salsa remembers the values of each computation in a database. When the inputs change, it consults this database to look for values that can be reused. The database is also used to implement interning (making a canonical version of a value that can be copied around and cheaply compared for equality) and other convenient salsa features.
Inputs
Every Salsa program begins with an input. Inputs are special structs that define the starting point of your program. Everything else in your program is ultimately a deterministic function of these inputs.
For example, in a compiler, there might be an input defining the contents of a file on disk:
```rust
#[salsa::input]
pub struct ProgramFile {
    pub path: PathBuf,
    pub contents: String,
}
```
You create an input by using the `new` method. Because the values of input fields are stored in the database, you also give an `&mut`-reference to the database:
```rust
let file: ProgramFile = ProgramFile::new(
    &mut db,
    PathBuf::from("some_path.txt"),
    String::from("fn foo() { }"),
);
```
Salsa structs are just an integer
The `ProgramFile` struct generated by the `salsa::input` macro doesn't actually store any data. It's just a newtyped integer id:
```rust
// Generated by the `#[salsa::input]` macro:
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct ProgramFile(salsa::Id);
```
This means that, when you have a `ProgramFile`, you can easily copy it around and put it wherever you like.
To actually read any of its fields, however, you will need to use the database and a getter method.
Reading fields and return_ref
You can access the value of an input's fields by using the getter method.
As this is only reading the field, it just needs a `&`-reference to the database:
```rust
let contents: String = file.contents(&db);
```
Invoking the accessor clones the value from the database.
Sometimes this is not what you want, so you can annotate fields with `#[return_ref]` to indicate that they should return a reference into the database instead:
```rust
#[salsa::input]
pub struct ProgramFile {
    pub path: PathBuf,
    #[return_ref]
    pub contents: String,
}
```
Now `file.contents(&db)` will return an `&String`.
You can also use the `data` method to access the entire struct:
```rust
file.data(&db)
```
Writing input fields
Finally, you can also modify the value of an input field by using the setter method.
Since this is modifying the input, the setter takes an `&mut`-reference to the database:
```rust
file.set_contents(&mut db, String::from("fn foo() { /* add a comment */ }"));
```
Tracked functions
Once you've defined your inputs, the next thing to define are tracked functions:
```rust
#[salsa::tracked]
fn parse_file(db: &dyn crate::Db, file: ProgramFile) -> Ast {
    let contents: &str = file.contents(db);
    ...
}
```
When you call a tracked function, salsa will track which inputs it accesses (in this example, `file.contents(db)`). It will also memoize the return value (the `Ast`, in this case).
If you call a tracked function twice, salsa checks if the inputs have changed; if not, it can return the memoized value.
The algorithm salsa uses to decide when a tracked function needs to be re-executed is called the red-green algorithm, and it's where the name salsa comes from.
Tracked functions have to follow a particular structure:
- They must take a `&`-reference to the database as their first argument.
  - Note that because this is an `&`-reference, it is not possible to create or modify inputs during a tracked function!
- They must take a "salsa struct" as the second argument -- in our example, this is an input struct, but there are other kinds of salsa structs we'll describe shortly.
- They can take additional arguments, but it's faster and better if they don't.
Tracked functions can return any clone-able type. A clone is required since, when the value is cached, the result will be cloned out of the database. Tracked functions can also be annotated with `#[return_ref]` if you would prefer to return a reference into the database instead (if `parse_file` were so annotated, then callers would actually get back an `&Ast`, for example).
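For instance, a `return_ref` version of the earlier example might look like this (a sketch; the `Ast::parse` helper is a hypothetical stand-in for real parsing logic):

```rust
// Sketch: `return_ref` on a tracked function.
#[salsa::tracked(return_ref)]
fn parse_file(db: &dyn crate::Db, file: ProgramFile) -> Ast {
    let contents: &str = file.contents(db);
    Ast::parse(contents) // hypothetical helper
}

// Callers now get a reference into the database:
// let ast: &Ast = parse_file(db, file);
```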
Tracked structs
Tracked structs are intermediate structs created during your computation. Like inputs, their fields are stored inside the database, and the struct itself just wraps an id. Unlike inputs, they can only be created inside a tracked function, and their fields can never change once they are created. Getter methods are provided to read the fields, but there are no setter methods. Example:
```rust
#[salsa::tracked]
struct Ast {
    #[return_ref]
    top_level_items: Vec<Item>,
}
```
Just as with an input, new values are created by invoking `Ast::new`. Unlike with an input, the `new` for a tracked struct only requires a `&`-reference to the database:
```rust
#[salsa::tracked]
fn parse_file(db: &dyn crate::Db, file: ProgramFile) -> Ast {
    let contents: &str = file.contents(db);
    let mut parser = Parser::new(contents);
    let mut top_level_items = vec![];
    while let Some(item) = parser.parse_top_level_item() {
        top_level_items.push(item);
    }
    Ast::new(db, top_level_items) // <-- create an Ast!
}
```
#[id] fields
When a tracked function is re-executed because its inputs have changed, the tracked structs it creates in the new execution are matched against those from the old execution, and the values of their fields are compared. If the field values have not changed, then other tracked functions that only read those fields will not be re-executed.
Normally, tracked structs are matched up by the order in which they are created.
For example, the first `Ast` that is created by `parse_file` in the old execution will be matched against the first `Ast` created by `parse_file` in the new execution. In our example, `parse_file` only ever creates a single `Ast`, so this works great.
Sometimes, however, it doesn't work so well.
For example, imagine that we had a tracked struct for items in the file:
```rust
#[salsa::tracked]
struct Item {
    name: Word, // we'll define Word in a second!
    ...
}
```
Maybe our parser first creates an `Item` with the name `foo` and then later a second `Item` with the name `bar`. Then the user changes the input to reorder the functions. Although we are still creating the same number of items, we are now creating them in the reverse order, so the naive algorithm will match up the old `foo` struct with the new `bar` struct. This will look to salsa as though the `foo` function was renamed to `bar` and the `bar` function was renamed to `foo`.
We'll still get the right result, but we might do more recomputation than necessary, since salsa doesn't realize that the items were simply reordered.
To address this, you can tag fields in a tracked struct as `#[id]`. These fields are then used to "match up" struct instances across executions:
```rust
#[salsa::tracked]
struct Item {
    #[id]
    name: Word, // we'll define Word in a second!
    ...
}
```
Specifying the result of tracked functions for particular structs
Sometimes it is useful to define a tracked function but specify its value for some particular struct specially. For example, maybe the default way to compute the representation for a function is to read the AST, but you also have some built-in functions in your language and you want to hard-code their results. This can also be used to simulate a field that is initialized after the tracked struct is created.
To support this use case, you can use the `specify` method associated with tracked functions. To enable this method, you need to add the `specify` flag to the function to alert users that its value may sometimes be specified externally.
```rust
#[salsa::tracked(specify)] // <-- specify flag required
fn representation(db: &dyn crate::Db, item: Item) -> Representation {
    // read the user's input AST by default
    let ast = ast(db, item);
    // ...
}

fn create_builtin_item(db: &dyn crate::Db) -> Item {
    let i = Item::new(db, ...);
    let r = hardcoded_representation();
    representation::specify(db, i, r); // <-- use the method!
    i
}
```
Specifying is only possible for tracked functions that take a single tracked struct as argument (besides the database).
Interned structs
The final kind of salsa struct is the interned struct. Interned structs are useful for quick equality comparison. They are commonly used to represent strings or other primitive values.
Most compilers, for example, will define a type to represent a user identifier:
```rust
#[salsa::interned]
struct Word {
    #[return_ref]
    pub text: String,
}
```
As with input and tracked structs, the `Word` struct itself is just a newtyped integer, and the actual data is stored in the database. You can create a new interned struct using `new`, just like with input and tracked structs:
```rust
let w1 = Word::new(db, "foo".to_string());
let w2 = Word::new(db, "bar".to_string());
let w3 = Word::new(db, "foo".to_string());
```
When you create two interned structs with the same field values, you are guaranteed to get back the same integer id. So here, we know that `assert_eq!(w1, w3)` and `assert_ne!(w1, w2)` both hold.
You can access the fields of an interned struct using a getter, like `word.text(db)`. These getters respect the `#[return_ref]` annotation. Like tracked structs, the fields of interned structs are immutable.
Accumulators
The final salsa concept is the accumulator. Accumulators are a way to report errors or other "side channel" information that is separate from the main return value of your function.
To create an accumulator, you declare a type as an accumulator:
```rust
#[salsa::accumulator]
pub struct Diagnostics(String);
```
It must be a newtype of something, like `String`. Now, during a tracked function's execution, you can push those values:
```rust
Diagnostics::push(db, "some_string".to_string())
```
Then later, from outside the execution, you can ask for the set of diagnostics that were accumulated by some particular tracked function. For example, imagine that we have a type-checker and, during type-checking, it reports some diagnostics:
```rust
#[salsa::tracked]
fn type_check(db: &dyn Db, item: Item) {
    // ...
    Diagnostics::push(db, "some error message".to_string())
    // ...
}
```
We can then later invoke the associated `accumulated` function to get all the `String` values that were pushed:
```rust
let v: Vec<String> = type_check::accumulated::<Diagnostics>(db);
```
Tutorial: calc
⚠️ IN-PROGRESS VERSION OF SALSA. ⚠️
This page describes the unreleased "Salsa 2022" version, which is a major departure from older versions of salsa. The code here works but is only available on GitHub and from the `salsa-2022` crate.
This tutorial walks through an end-to-end example of using Salsa. It does not assume you know anything about salsa, but reading the overview first is probably a good idea to get familiar with the basic concepts.
Our goal is to define a compiler/interpreter for a simple language called `calc`. The `calc` compiler takes programs like the following and then parses and executes them:
```
fn area_rectangle(w, h) = w * h
fn area_circle(r) = 3.14 * r * r
print area_rectangle(3, 4)
print area_circle(1)
print 11 * 2
```
When executed, this program prints `12`, `3.14`, and `22`.
If the program contains errors (e.g., a reference to an undefined function), it prints those out too. And, of course, it will be reactive, so small changes to the input don't require recompiling (or re-executing, necessarily) the entire thing.
Basic structure
Before we do anything with salsa, let's talk about the basic structure of the calc compiler. Part of salsa's design is that you are able to write programs that feel 'pretty close' to what a natural Rust program looks like.
Example program
This is our example calc program:
```
x = 5
y = 10
z = x + y * 3
print z
```
Parser
The calc compiler takes as input a program, represented by a string:
```rust
struct ProgramSource {
    text: String,
}
```
The first thing it does is to parse that string into a series of statements that look something like the following pseudo-Rust:
```rust
enum Statement {
    /// Defines `fn <name>(<args>) = <body>`
    Function(Function),
    /// Defines `print <expr>`
    Print(Expression),
}

/// Defines `fn <name>(<args>) = <body>`
struct Function {
    name: FunctionId,
    args: Vec<VariableId>,
    body: Expression,
}
```
where an expression is something like this (pseudo-Rust, because the `Expression` enum is recursive):
```rust
enum Expression {
    Op(Expression, Op, Expression),
    Number(f64),
    Variable(VariableId),
    Call(FunctionId, Vec<Expression>),
}

enum Op {
    Add,
    Subtract,
    Multiply,
    Divide,
}
```
Finally, for function/variable names, the `FunctionId` and `VariableId` types will be interned strings:
```rust
type FunctionId = /* interned string */;
type VariableId = /* interned string */;
```
Because calc is so simple, we don't have to bother separating out the lexer from the parser.
Checker
The "checker" has the job of ensuring that the user only references variables that have been defined. We're going to write the checker in a "context-less" style, which is a bit less intuitive but allows for more incremental re-use. The idea is to compute, for a given expression, which variables it references. Then there is a function "check" which ensures that those variables are a subset of those that are already defined.
Interpreter
The interpreter will execute the program and print the result. We don't bother with much incremental re-use here, though it's certainly possible.
Jars and databases
Before we can define the interesting parts of our salsa program, we have to set up a bit of structure that defines the salsa database. The database is a struct that ultimately stores all of salsa's intermediate state, such as the memoized return values from tracked functions.
The database itself is defined in terms of intermediate structures, called jars, which themselves contain the data for each function. This setup allows salsa programs to be divided amongst many crates. Typically, you define one jar struct per crate, and then when you construct the final database, you simply list the jar structs. This permits the crates to define private functions and other things that are members of the jar struct, but not known directly to the database.
(Jars of salsa -- get it? Get it?? OK, maybe it also brings to mind Java `.jar` files, but there's no real relationship. A jar is just a Rust struct, not a packaging format.)
Defining a jar struct
To define a jar struct, you create a tuple struct with the `#[salsa::jar]` annotation:
```rust
#[salsa::jar(db = Db)]
pub struct Jar(
    crate::ir::SourceProgram,
    crate::ir::VariableId,
    crate::ir::FunctionId,
    crate::ir::Expression,
    crate::ir::Statement,
    crate::ir::Function,
    crate::ir::Diagnostics,
    crate::parser::parse_statements,
);
```
Although it's not required, it's highly recommended to put the jar struct at the root of your crate, so that it can be referred to as `crate::Jar`. All of the other salsa annotations reference a jar struct, and they all default to the path `crate::Jar`. If you put the jar somewhere else, you will have to override that default.
Defining the database trait
The `#[salsa::jar]` annotation also includes a `db = Db` field. The value of this field (normally `Db`) is the name of a trait that represents the database. Salsa programs never refer directly to the database; instead, they take a `&dyn Db` argument. This allows for separate compilation, where you have a database that contains the data for two jars, but those jars don't depend on one another.
The database trait for our `calc` crate is very simple:
```rust
pub trait Db: salsa::DbWithJar<Jar> {}
```
When you define a database trait like `Db`, the one thing that is required is that it must have a supertrait `salsa::DbWithJar<Jar>`, where `Jar` is the jar struct. If your jar depends on other jars, you can have multiple such supertraits (e.g., `salsa::DbWithJar<other_crate::Jar>`). Typically the `Db` trait has no other members or supertraits, but you are also free to add whatever other things you want in the trait. When you define your final database, it will implement the trait, and you can then define the implementation of those other things. This allows you to create a way for your jar to request context or other info from the database that is not moderated through salsa, should you need that.
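For example (a hypothetical extension, not part of the calc tutorial), the trait could expose a configuration value that the driver supplies and that salsa does not track:

```rust
// Hypothetical: a database trait with an extra, non-salsa member.
pub trait Db: salsa::DbWithJar<Jar> {
    /// Configuration provided by the driver, outside of salsa's tracking.
    fn verbosity(&self) -> u32;
}
```

Note that once the trait has custom members like this, the blanket impl shown in the next section no longer suffices; the final database struct has to implement the trait itself.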
Implementing the database trait for the jar
The `Db` trait must be implemented by the database struct. We're going to define the database struct in a later section, and one option would be to simply implement the jar's `Db` trait there. However, since we don't define any custom logic in the trait, a common choice is to write a blanket impl for any type that implements `DbWithJar<Jar>`, and that's what we do here:
```rust
impl<DB> Db for DB where DB: ?Sized + salsa::DbWithJar<Jar> {}
```
Summary
If the concept of a jar seems a bit abstract to you, don't overthink it. The TL;DR is that when you create a salsa program, you need to do:
- In each of your crates:
  - Define a `#[salsa::jar(db = Db)]` struct, typically at `crate::Jar`, and list each of your various salsa-annotated things inside of it.
  - Define a `Db` trait, typically at `crate::Db`, that you will use in memoized functions and elsewhere to refer to the database struct.
- Once, typically in your final crate:
  - Define a database `D`, as described in the next section, that will contain a list of each of the jars for each of your crates.
  - Implement the `Db` traits for each jar for your database type `D` (often we do this through blanket impls in the jar crates).
Defining the database struct
Now that we have defined a jar, we need to create the database struct. The database struct is where all the jars come together. Typically it is only used by the "driver" of your application; the one which starts up the program, supplies the inputs, and relays the outputs.
In `calc`, the database struct is in the `db` module, and it looks like this:
```rust
#[salsa::db(crate::Jar)]
pub(crate) struct Database {
    storage: salsa::Storage<Self>,
}
```
The `#[salsa::db(...)]` attribute takes a list of all the jars to include. The struct must have a field named `storage` whose type is `salsa::Storage<Self>`, but it can also contain whatever other fields you want. The `storage` struct owns all the data for the jars listed in the `db` attribute. The `salsa::db` attribute also autogenerates a bunch of impls for things like the `salsa::HasJar<crate::Jar>` trait.
Implementing the salsa::Database trait
In addition to the struct itself, we must add an impl of `salsa::Database`:
```rust
impl salsa::Database for Database {
    fn salsa_runtime(&self) -> &salsa::Runtime {
        self.storage.runtime()
    }
}
```
Implementing the salsa::ParallelDatabase trait
If you want to permit accessing your database from multiple threads at once, then you also need to implement the `ParallelDatabase` trait:
```rust
impl salsa::ParallelDatabase for Database {
    fn snapshot(&self) -> salsa::Snapshot<Self> {
        salsa::Snapshot::new(Database {
            storage: self.storage.snapshot(),
        })
    }
}
```
Implementing the Default trait
It's not required, but implementing the `Default` trait is often a convenient way to let users instantiate your database:
```rust
impl Default for Database {
    fn default() -> Self {
        Self {
            storage: Default::default(),
        }
    }
}
```
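Since `salsa::Storage<Self>` itself implements `Default`, a derive is an equivalent, slightly shorter alternative (a sketch, assuming the database has no other non-`Default` fields):

```rust
// Equivalent alternative: derive Default instead of writing it by hand.
#[derive(Default)]
#[salsa::db(crate::Jar)]
pub(crate) struct Database {
    storage: salsa::Storage<Self>,
}
```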
Implementing the traits for each Jar
The `Database` struct also needs to implement the database traits for each jar. In our case, though, we already wrote that impl as a blanket impl alongside the jar itself, so no action is needed. This is the recommended strategy unless your trait has custom members that depend on fields of the `Database` itself (for example, sometimes the `Database` holds some kind of custom resource that you want to give access to).
Defining the IR
Before we can define the parser, we need to define the intermediate representation (IR) that we will use for `calc` programs. In the basic structure, we defined some "pseudo-Rust" structures like `Statement` and `Expression`; now we are going to define them for real.
"Salsa structs"
In addition to regular Rust types, we will make use of various salsa structs. A salsa struct is a struct that has been annotated with one of the salsa annotations:
- `#[salsa::input]`, which designates the "base inputs" to your computation;
- `#[salsa::tracked]`, which designates intermediate values created during your computation;
- `#[salsa::interned]`, which designates small values that are easy to compare for equality.
All salsa structs store the actual values of their fields in the salsa database. This permits us to track when the values of those fields change to figure out what work will need to be re-executed.
When you annotate a struct with one of the above salsa attributes, salsa actually generates a bunch of code to link that struct into the database.
This code must be connected to some jar. By default, this is `crate::Jar`, but you can specify a different jar with the `jar=` attribute (e.g., `#[salsa::input(jar = MyJar)]`). You must also list the struct in the jar definition itself, or you will get errors.
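For example, a sketch of tying a struct to a non-default jar (`MyJar` and `Config` are hypothetical names):

```rust
// Hypothetical: a salsa struct connected to a non-default jar.
#[salsa::input(jar = MyJar)]
pub struct Config {
    flag: bool,
}

#[salsa::jar(db = Db)]
pub struct MyJar(Config); // the struct must also be listed in the jar
```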
Input structs
The first thing we will define is our input. Every salsa program has some basic inputs that drive the rest of the computation. The rest of the program must be some deterministic function of those base inputs, such that when those inputs change, we can try to efficiently recompute the new result of that function.
Inputs are defined as Rust structs with a `#[salsa::input]` annotation:
```rust
#[salsa::input]
pub struct SourceProgram {
    #[return_ref]
    text: String,
}
```
In our compiler, we have just one simple input, the `SourceProgram`, which has a `text` field (the string).
The data lives in the database
Although they are declared like other Rust structs, salsa structs are implemented quite differently. The values of their fields are stored in the salsa database, and the struct itself just contains a numeric identifier. This means that the struct instances are `Copy` (no matter what fields they contain). Creating instances of the struct and accessing fields is done by invoking methods like `new` as well as getters and setters.
More concretely, the `#[salsa::input]` annotation will generate a struct for `SourceProgram` like this:
```rust
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct SourceProgram(salsa::Id);
```
It will also generate a method `new` that lets you create a `SourceProgram` in the database. For an input, a `&mut db` reference is required, along with the values for each field:
```rust
let source = SourceProgram::new(&mut db, "print 11 + 11".to_string());
```
You can read the value of the field with `source.text(&db)`, and you can set the value of the field with `source.set_text(&mut db, "print 11 * 2".to_string())`.
Database revisions
Whenever a function takes an `&mut` reference to the database, that means that it can only be invoked from outside the incrementalized part of your program, as explained in the overview. When you change the value of an input field, that increments a 'revision counter' in the database, indicating that some inputs are different now. When we talk about a "revision" of the database, we are referring to the state of the database in between changes to the input values.
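Concretely (a sketch reusing the `SourceProgram` input defined above):

```rust
// Sketch: each call to a setter bumps the database's revision counter.
let source = SourceProgram::new(&mut db, "print 11".to_string());
// ...the db is in some revision R1...
source.set_text(&mut db, "print 22".to_string());
// ...now the db is in a new revision R2, and tracked functions that
// read `source.text(&db)` are candidates for re-execution.
```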
Tracked structs
Next we will define a tracked struct to represent the functions in our input. Whereas inputs represent the start of a computation, tracked structs represent intermediate values created during your computation. In this case, we are going to parse the raw input program and create a `Function` for each of the functions defined by the user.
```rust
#[salsa::tracked]
pub struct Function {
    #[id]
    name: FunctionId,
    args: Vec<VariableId>,
    body: Expression,
}
```
Unlike with inputs, the fields of tracked structs are immutable once created. Otherwise, working with a tracked struct is quite similar to an input:
- You can create a new value by using `new`, but with a tracked struct, you only need an `&dyn` database, not `&mut` (e.g., `Function::new(&db, some_name, some_args, some_body)`).
- You use a getter to read the value of a field, just like with an input (e.g., `my_func.args(db)` to read the `args` field).
id fields
To get better reuse across revisions, particularly when things are reordered, you can mark some entity fields with `#[id]`.
Normally, you would do this on fields that represent the "name" of an entity.
This indicates that, across two revisions R1 and R2, if two functions are created with the same name, they refer to the same entity, so we can compare their other fields for equality to determine what needs to be re-executed.
Adding `#[id]` attributes is an optimization and never affects correctness. For more details, see the algorithm page of the reference.
Interned structs
The final kind of salsa struct is the interned struct. As with input and tracked structs, the data for an interned struct is stored in the database, and you just pass around a single integer. Unlike those structs, if you intern the same data twice, you get back the same integer.
A classic use of interning is for small strings like function names and variables.
It's annoying and inefficient to pass around those names as `String` values which must be cloned; it's also inefficient to have to compare them for equality via string comparison. Therefore, we define two interned structs, `FunctionId` and `VariableId`, each with a single field that stores the string:
```rust
#[salsa::interned]
pub struct VariableId {
    #[return_ref]
    pub text: String,
}

#[salsa::interned]
pub struct FunctionId {
    #[return_ref]
    pub text: String,
}
```
When you invoke e.g. `FunctionId::new(&db, "my_string".to_string())`, you will get back a `FunctionId` that is just a newtype'd integer. But if you invoke the same call to `new` again, you get back the same integer:
```rust
let f1 = FunctionId::new(&db, "my_string".to_string());
let f2 = FunctionId::new(&db, "my_string".to_string());
assert_eq!(f1, f2);
```
Expressions and statements
We'll also intern expressions and statements. This is convenient primarily because it allows us to have recursive structures very easily. Since we don't really need the "cheap equality comparison" aspect of interning, this isn't the most efficient choice, and many compilers would opt to represent expressions/statements in some other way.
```rust
#[salsa::interned]
pub struct Statement {
    data: StatementData,
}

#[derive(Eq, PartialEq, Clone, Hash)]
pub enum StatementData {
    /// Defines `fn <name>(<args>) = <body>`
    Function(Function),
    /// Defines `print <expr>`
    Print(Expression),
}

#[salsa::interned]
pub struct Expression {
    #[return_ref]
    data: ExpressionData,
}

#[derive(Eq, PartialEq, Clone, Hash)]
pub enum ExpressionData {
    Op(Expression, Op, Expression),
    Number(OrderedFloat<f64>),
    Variable(VariableId),
    Call(FunctionId, Vec<Expression>),
}

#[derive(Eq, PartialEq, Copy, Clone, Hash, Debug)]
pub enum Op {
    Add,
    Subtract,
    Multiply,
    Divide,
}
```
Interned ids are guaranteed to be consistent within a revision, but not across revisions (but you don't have to care)
Interned ids are guaranteed not to change within a single revision, so you can intern things from all over your program and get back consistent results. When you change the inputs, however, salsa may opt to clear some of the interned values and choose different integers. However, if this happens, it will also be sure to re-execute every function that interned that value, so all of them still see a consistent value, just a different one than they saw in a previous revision.
In other words, within a salsa computation, you can assume that interning produces a single consistent integer, and you don't have to think about it. If, however, you export interned identifiers outside the computation and then change the inputs, they may no longer be valid or may refer to different values.
Defining the parser: memoized functions and inputs
The next step in the `calc` compiler is to define the parser. The role of the parser will be to take the `SourceProgram` input, read the string from the `text` field, and create the `Statement`, `Function`, and `Expression` structures that we defined in the `ir` module.
To minimize dependencies, we are going to write a recursive descent parser. Another option would be to use a Rust parsing framework. We won't cover the parsing itself in this tutorial -- you can read the code if you want to see how it works. We're going to focus only on the salsa-related aspects.
The parse_statements function
The starting point for the parser is the `parse_statements` function:
```rust
#[salsa::tracked(return_ref)]
pub fn parse_statements(db: &dyn crate::Db, source: SourceProgram) -> Vec<Statement> {
    // Get the source text from the database
    let source_text = source.text(db);

    // Create the parser
    let mut parser = Parser {
        db,
        source_text,
        position: 0,
    };

    // Read in statements until we reach the end of the input
    let mut result = vec![];
    loop {
        // Skip over any whitespace
        parser.skip_whitespace();

        // If there are no more tokens, break
        if let None = parser.peek() {
            break;
        }

        // Otherwise, there is more input, so parse a statement.
        if let Some(statement) = parser.parse_statement() {
            result.push(statement);
        } else {
            // If we failed, report an error at whatever position the parser
            // got stuck. We could recover here by skipping to the end of the line
            // or something like that. But we leave that as an exercise for the reader!
            parser.report_error();
            break;
        }
    }
    result
}
```
This function is annotated as `#[salsa::tracked]`. That means that, when it is called, salsa will track what inputs it reads as well as what value it returns. The return value is memoized, which means that if you call this function again without changing the inputs, salsa will just clone the result rather than re-execute it.
Tracked functions are the unit of reuse
Tracked functions are the core part of how salsa enables incremental reuse. The goal of the framework is to avoid re-executing tracked functions and instead to clone their result. Salsa uses the red-green algorithm to decide when to re-execute a function. The short version is that a tracked function is re-executed if either (a) it directly reads an input, and that input has changed, or (b) it directly invokes another tracked function, and that function's return value has changed.
In the case of `parse_statements`, it directly reads `SourceProgram::text`, so if the text changes, then the parser will re-execute.
By choosing which functions to mark as `#[salsa::tracked]`, you control how much reuse you get. In our case, we're opting to mark the outermost parsing function as tracked, but not the inner ones. This means that if the input changes, we will always re-parse the entire input and re-create the resulting statements and so forth. We'll see later that this doesn't mean we will always re-run the type checker and other parts of the compiler.
This trade-off makes sense because (a) parsing is very cheap, so the overhead of tracking and enabling finer-grained reuse doesn't pay off and because (b) since strings are just a big blob-o-bytes without any structure, it's rather hard to identify which parts of the IR need to be reparsed. Some systems do choose to do more granular reparsing, often by doing a "first pass" over the string to give it a bit of structure, e.g. to identify the functions, but deferring the parsing of the body of each function until later. Setting up a scheme like this is relatively easy in salsa, and uses the same principles that we will use later to avoid re-executing the type checker.
Parameters to a tracked function
The first parameter to a tracked function is always the database, db: &dyn crate::Db
.
It must be a dyn
value of whatever database is associated with the jar.
The second parameter to a tracked function is always some kind of salsa struct.
The first parameter to a memoized function is always the database,
which should be a dyn Trait
value for the database trait associated with the jar
(the default jar is crate::Jar
).
Tracked functions may take other arguments as well, though our examples here do not. Functions that take additional arguments are less efficient and flexible. It's generally better to structure tracked functions as functions of a single salsa struct if possible.
The return_ref annotation
You may have noticed that `parse_statements` is tagged with `#[salsa::tracked(return_ref)]`. Ordinarily, when you call a tracked function, the result you get back is cloned out of the database. The `return_ref` attribute means that a reference into the database is returned instead. So, when called, `parse_statements` will return an `&Vec<Statement>` rather than cloning the `Vec`.
This is useful as a performance optimization. (You may recall the `return_ref` annotation from the ir section of the tutorial, where it was placed on struct fields, with roughly the same meaning.)
Defining the parser: reporting errors
The last interesting case in the parser is how to handle a parse error. Because salsa functions are memoized and may not execute, they should not have side-effects, so we don't just want to call `eprintln!`. If we did so, the error would only be reported the first time the function was called.
Salsa defines a mechanism for managing this called an accumulator. In our case, we define an accumulator struct called `Diagnostics` in the `ir` module:
```rust
#[salsa::accumulator]
pub struct Diagnostics(Diagnostic);

#[derive(Clone, Debug)]
pub struct Diagnostic {
    pub position: usize,
    pub message: String,
}
```
Accumulator structs are always newtype structs with a single field, in this case of type `Diagnostic`. Memoized functions can push `Diagnostic` values onto the accumulator. Later, you can invoke a method to find all the values that were pushed by a given memoized function or by any function that it called (e.g., we could get the set of `Diagnostic` values produced by the `parse_statements` function).
The `Parser::report_error` method contains an example of pushing a diagnostic:
```rust
/// Report an error diagnostic at the current position.
fn report_error(&self) {
    Diagnostics::push(
        self.db,
        Diagnostic {
            position: self.position,
            message: "unexpected character".to_string(),
        },
    );
}
```
To get the set of diagnostics produced by `parse_statements`, or any other memoized function, we invoke the associated `accumulated` function:
```rust
// Use turbofish to specify the diagnostics type.
let accumulated: Vec<Diagnostic> = parse_statements::accumulated::<Diagnostics>(db);
```
`accumulated` takes the database `db` as argument and returns a `Vec`.
Defining the parser: debug impls and testing
As the final part of the parser, we need to write some tests.
To do so, we will create a database, set the input source text, run the parser, and check the result.
Before we can do that, though, we have to address one question: how do we inspect the value of an interned type like `Expression`?
The DebugWithDb trait
Because an interned type like `Expression` just stores an integer, the traditional `Debug` trait is not very useful. To properly print an `Expression`, you need to access the salsa database to find out what its value is.
To solve this, `salsa` provides a `DebugWithDb` trait that acts like the regular `Debug`, but takes a database as argument. For types that implement this trait, you can invoke the `debug` method. This returns a temporary that implements the ordinary `Debug` trait, allowing you to write something like
```rust
eprintln!("Expression = {:?}", expr.debug(db));
```
and get back the output you expect.
Implementing the DebugWithDb trait
For now, unfortunately, you have to implement the `DebugWithDb` trait manually, as we do not provide a derive. This is tedious but not difficult. Here is an example of implementing the trait for `Expression`:
```rust
impl DebugWithDb<dyn crate::Db + '_> for Expression {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>, db: &dyn crate::Db) -> std::fmt::Result {
        match self.data(db) {
            ExpressionData::Op(a, b, c) => f
                .debug_tuple("ExpressionData::Op")
                .field(&a.debug(db)) // use `a.debug(db)` for interned things
                .field(&b.debug(db))
                .field(&c.debug(db))
                .finish(),
            ExpressionData::Number(a) => {
                f.debug_tuple("Number")
                    .field(a) // use just `a` otherwise
                    .finish()
            }
            ExpressionData::Variable(a) => f.debug_tuple("Variable").field(&a.debug(db)).finish(),
            ExpressionData::Call(a, b) => f
                .debug_tuple("Call")
                .field(&a.debug(db))
                .field(&b.debug(db))
                .finish(),
        }
    }
}
```
Some things to note:
- The `data` method gives access to the full enum from the database.
- The `Formatter` methods (e.g., `debug_tuple`) can be used to provide consistent output.
- When printing the value of a field, use `.field(&a.debug(db))` for fields that are themselves interned or entities, and use `.field(&a)` for fields that just implement the ordinary `Debug` trait.
Forwarding to the ordinary Debug trait
For consistency, it is sometimes useful to have a `DebugWithDb` implementation even for types, like `Op`, that are just ordinary enums. You can do that like so:
```rust
impl DebugWithDb<dyn crate::Db + '_> for Op {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>, _db: &dyn crate::Db) -> std::fmt::Result {
        write!(f, "{:?}", self)
    }
}

impl DebugWithDb<dyn crate::Db + '_> for Diagnostic {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>, _db: &dyn crate::Db) -> std::fmt::Result {
        write!(f, "{:?}", self)
    }
}
```
Writing the unit test
Now that we have our `DebugWithDb` impls in place, we can write a simple unit test harness. The `parse_string` function below creates a database, sets the source text, and then invokes the parser:
```rust
/// Create a new database with the given source text and parse the result.
/// Returns the statements and the diagnostics generated.
#[cfg(test)]
fn parse_string(source_text: &str) -> String {
    use salsa::debug::DebugWithDb;

    // Create the database
    let mut db = crate::db::Database::default();

    // Create the source program
    let source_program = SourceProgram::new(&mut db, source_text.to_string());

    // Invoke the parser
    let statements = parse_statements(&db, source_program);

    // Read out any diagnostics
    let accumulated = parse_statements::accumulated::<Diagnostics>(&db, source_program);

    // Format the result as a string and return it
    format!("{:#?}", (statements, accumulated).debug(&db))
}
```
Combined with the `expect-test` crate, we can then write unit tests like this one:
```rust
#[test]
fn parse_print() {
    let actual = parse_string("print 1 + 2");
    let expected = expect_test::expect![[r#"
        (
            [
                ExpressionData::Op(
                    Number(
                        OrderedFloat(
                            1.0,
                        ),
                    ),
                    Add,
                    Number(
                        OrderedFloat(
                            2.0,
                        ),
                    ),
                ),
            ],
            [],
        )"#]];
    expected.assert_eq(&actual);
}
```
Defining the checker
Defining the interpreter
Reference
The "red-green" algorithm
This page explains the basic salsa incremental algorithm. The algorithm is called the "red-green" algorithm, which is where the name salsa comes from.
Database revisions
The salsa database always tracks a single revision. Each time you set an input, the revision is incremented. So we start in revision `R1`, but when a `set` method is called, we will go to `R2`, then `R3`, and so on. For each input, we also track the revision in which it was last changed.
Basic rule: when inputs change, re-execute!
When you invoke a tracked function, in addition to storing the value that was returned, we also track what other tracked functions it depends on, and the revisions when their value last changed. When you invoke the function again, if the database is in a new revision, then we check whether any of the inputs to this function have changed in that new revision. If not, we can just return our cached value. But if the inputs have changed (or may have changed), we will re-execute the function to find the most up-to-date answer.
Here is a simple example, where the `parse_module` function invokes the `module_text` function:
```rust
#[salsa::tracked]
fn parse_module(db: &dyn Db, module: Module) -> Ast {
    let module_text: &String = module_text(db, module);
    Ast::parse_text(module_text)
}

#[salsa::tracked(return_ref)]
fn module_text(db: &dyn Db, module: Module) -> String {
    panic!("text for module `{module:?}` not set")
}
```
If we invoke `parse_module` twice, but change the module text in between, then we will have to re-execute `parse_module`:
```rust
module_text::set(
    db,
    module,
    "fn foo() { }".to_string(),
);
parse_module(db, module); // executes

// ...some time later...

module_text::set(
    db,
    module,
    "fn foo() { /* add a comment */ }".to_string(),
);
parse_module(db, module); // executes again!
```
Backdating: sometimes we can be smarter
Often, though, tracked functions don't depend directly on the inputs. Instead, they'll depend on some other tracked function. For example, perhaps we have a `type_check` function that reads the AST:
```rust
#[salsa::tracked]
fn type_check(db: &dyn Db, module: Module) {
    let ast = parse_module(db, module);
    ...
}
```
If the module text is changed, we saw that we have to re-execute `parse_module`, but there are many changes to the source text that still produce the same AST. For example, suppose we simply add a comment? In that case, if `type_check` is called again, we will:
- First re-execute `parse_module`, since its input changed.
- Then compare the resulting AST. If it's the same as last time, we can backdate the result, meaning that we say that, even though the inputs changed, the output didn't.
Durability: an optimization
As an optimization, salsa includes the concept of durability. When you set the value of a tracked function, you can also set it with a given durability:
```rust
module_text::set_with_durability(
    db,
    module,
    "fn foo() { }".to_string(),
    salsa::Durability::HIGH,
);
```
For each durability, we track the revision in which some input with that durability changed. If a tracked function depends (transitively) only on high durability inputs, and you change a low durability input, then we can very easily determine that the tracked function result is still valid, avoiding the need to traverse the input edges one by one.
An example: if compiling a Rust program, you might mark the inputs from crates.io as high durability inputs, since they are unlikely to change. The current workspace could be marked as low durability.
Common patterns
This section documents patterns for using Salsa.
Selection
The "selection" (or "firewall") pattern is when you have a query Qsel that reads from some other Qbase and extracts some small bit of information from Qbase that it returns. In particular, Qsel does not combine values from other queries. In some sense, then, Qsel is redundant -- you could have just extracted the information the information from Qbase yourself, and done without the salsa machinery. But Qsel serves a role in that it limits the amount of re-execution that is required when Qbase changes.
Example: the base query
For example, imagine that you have a query `parse` that parses the input text of a request and returns a `ParsedResult`, which contains a header and a body:
```rust
#[derive(Clone, Debug, PartialEq, Eq)]
struct ParsedResult {
    header: Vec<ParsedHeader>,
    body: String,
}

#[derive(Clone, Debug, PartialEq, Eq)]
struct ParsedHeader {
    key: String,
    value: String,
}

#[salsa::query_group(Request)]
trait RequestParser {
    /// The base text of the request.
    #[salsa::input]
    fn request_text(&self) -> String;

    /// The parsed form of the request.
    fn parse(&self) -> ParsedResult;
}
```
Example: a selecting query
And now you have a number of derived queries that only look at the header. For example, one might extract the "content-type" header:
```rust
#[salsa::query_group(Request)]
trait RequestUtil: RequestParser {
    fn content_type(&self) -> Option<String>;
}

fn content_type(db: &dyn RequestUtil) -> Option<String> {
    db.parse()
        .header
        .iter()
        .find(|header| header.key == "content-type")
        .map(|header| header.value.clone())
}
```
Why prefer a selecting query?
This `content_type` query is an instance of the selection pattern. It only "selects" a small bit of information from the `ParsedResult`. You might not have made it a query at all, but instead made it a method on `ParsedResult`.
But using a query for `content_type` has an advantage: now if there are downstream queries that only depend on the `content_type` (or perhaps on other headers extracted via a similar pattern), those queries will not have to be re-executed when the request changes unless the content-type header changes. Consider the dependency graph:
```
request_text --> parse --> content_type --> (other queries)
```
When the `request_text` changes, we are always going to have to re-execute `parse`. If that produces a new parsed result, we are also going to re-execute `content_type`. But if the result of `content_type` has not changed, then we will not re-execute the other queries.
More levels of selection
In fact, in our example we might consider introducing another level of selection. Instead of having `content_type` directly access the results of `parse`, it might be better to insert a selecting query that just extracts the header:
```rust
#[salsa::query_group(Request)]
trait RequestUtil: RequestParser {
    fn header(&self) -> Vec<ParsedHeader>;
    fn content_type(&self) -> Option<String>;
}

fn header(db: &dyn RequestUtil) -> Vec<ParsedHeader> {
    db.parse().header
}

fn content_type(db: &dyn RequestUtil) -> Option<String> {
    db.header()
        .iter()
        .find(|header| header.key == "content-type")
        .map(|header| header.value.clone())
}
```
This will result in a dependency graph like so:
```
request_text --> parse --> header --> content_type --> (other queries)
```
The advantage of this is that changes that only affect the "body" or only consume small parts of the request will not require us to re-execute `content_type` at all. This would be particularly valuable if there are a lot of dependent headers.
A note on cloning and efficiency
In this example, we used common Rust types like `Vec` and `String`, and we cloned them quite frequently. This will work just fine in Salsa, but it may not be the most efficient choice. This is because each clone is going to produce a deep copy of the result. As a simple fix, you might convert your data structures to use `Arc` (e.g., `Arc<Vec<ParsedHeader>>`), which makes cloning cheap.
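For instance (a sketch of the `Arc` variant suggested above, reusing `ParsedHeader` from earlier):

```rust
use std::sync::Arc;

// With Arc, cloning the query result is a cheap reference-count bump
// rather than a deep copy of every header.
#[derive(Clone, Debug, PartialEq, Eq)]
struct ParsedResult {
    header: Arc<Vec<ParsedHeader>>,
    body: String,
}
```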
On-Demand (Lazy) Inputs
Salsa input queries work best if you can easily provide all of the inputs upfront. However, sometimes the set of inputs is not known beforehand.
A typical example is reading files from disk. While it is possible to eagerly scan a particular directory and create an in-memory file tree in a salsa input query, a more straight-forward approach is to read the files lazily. That is, when someone requests the text of a file for the first time:
- Read the file from disk and cache it.
- Setup a file-system watcher for this path.
- Invalidate the cached file once the watcher sends a change notification.
This is possible to achieve in salsa, using a derived query together with `report_synthetic_read` and `invalidate`.
The setup looks roughly like this:
```rust
#[salsa::query_group(VfsDatabaseStorage)]
trait VfsDatabase: salsa::Database + FileWatcher {
    fn read(&self, path: PathBuf) -> String;
}

trait FileWatcher {
    fn watch(&self, path: &Path);
    fn did_change_file(&mut self, path: &Path);
}

fn read(db: &dyn VfsDatabase, path: PathBuf) -> String {
    db.salsa_runtime()
        .report_synthetic_read(salsa::Durability::LOW);
    db.watch(&path);
    std::fs::read_to_string(&path).unwrap_or_default()
}

#[salsa::database(VfsDatabaseStorage)]
struct MyDatabase { ... }

impl FileWatcher for MyDatabase {
    fn watch(&self, path: &Path) { ... }
    fn did_change_file(&mut self, path: &Path) {
        ReadQuery.in_db_mut(self).invalidate(path);
    }
}
```
- We declare the query as a derived query (which is the default).
- In the query implementation, we don't call any other query and just directly read the file from disk.
- Because the query doesn't read any inputs, it will be assigned a `HIGH` durability by default, which we override with `report_synthetic_read`.
- The result of the query is cached, and we must call `invalidate` to clear this cache.
A complete, runnable file-watching example can be found in this git repo along with a write-up that explains more about the code and what it is doing.
Tuning Salsa
LRU Cache
You can specify an LRU cache size for any non-input query:
```rust
let lru_capacity: usize = 128;
base_db::ParseQuery.in_db_mut(self).set_lru_capacity(lru_capacity);
```
The default is `0`, which disables LRU-caching entirely.
See The LRU RFC for more details.
Note that there is no garbage collection for keys and results of old queries, so LRU caches are currently the only knob available for avoiding unbounded memory usage for long-running apps built on Salsa.
Intern Queries
Intern queries can make key lookup cheaper, save memory, and avoid the need for `Arc`. Interning is especially useful for queries that involve nested, tree-like data structures.
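A sketch of what an intern query looks like in this (pre-2022) query-group style; `PathId`, `intern_path`, and `PathInterner` are illustrative names:

```rust
use std::path::PathBuf;
use salsa::{InternId, InternKey};

// The interned key type is a newtype over salsa::InternId.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct PathId(InternId);

impl InternKey for PathId {
    fn from_intern_id(id: InternId) -> Self {
        PathId(id)
    }
    fn as_intern_id(&self) -> InternId {
        self.0
    }
}

#[salsa::query_group(PathInternStorage)]
trait PathInterner {
    // Interning the same PathBuf twice yields the same PathId; a
    // `lookup_intern_path` query is generated for the reverse direction.
    #[salsa::interned]
    fn intern_path(&self, path: PathBuf) -> PathId;
}
```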
See:
- The Intern Queries RFC
- The `compiler` example, which uses interning.
Granularity of Incrementality
See:
Cancellation
Queries that are no longer needed due to concurrent writes or changes in dependencies are cancelled by Salsa. Each access of an intermediate query is a potential cancellation point. Cancellation is implemented via panicking, and Salsa internals are intended to be panic-safe.
If you have a query that contains a long loop which does not execute any intermediate queries, salsa won't be able to cancel it automatically. You may wish to check for cancellation yourself by invoking `db.unwind_if_cancelled()`.
For more details on cancellation, see:
- the Opinionated cancellation RFC.
- The tests for cancellation behavior in the Salsa repo.
Cycle handling
By default, when Salsa detects a cycle in the computation graph, it will panic with a `salsa::Cycle` as the panic value. The `salsa::Cycle` structure describes the cycle, which can be useful for diagnosing what went wrong.
Recovering via fallback
Panicking when a cycle occurs is ok for situations where you believe a cycle is impossible. But sometimes cycles can result from illegal user input and cannot be statically prevented. In these cases, you might prefer to gracefully recover from a cycle rather than panicking the entire query. Salsa supports that with the idea of cycle recovery.
To use cycle recovery, you annotate potential participants in the cycle with a `#[salsa::recover(my_recover_fn)]` attribute. When a cycle occurs, if any participant P has recovery information, then no panic occurs. Instead, the execution of P is aborted and P will execute the recovery function to generate its result. Participants in the cycle that do not have recovery information continue executing as normal, using this recovery result.
The recovery function has a similar signature to a query function. It is given a reference to your database along with a `salsa::Cycle` describing the cycle that occurred; it returns the result of the query. Example:
```rust
fn my_recover_fn(
    db: &dyn MyDatabase,
    cycle: &salsa::Cycle,
) -> MyResultValue
```
The `db` and `cycle` arguments can be used to prepare a useful error message for your users.
Important: Although the recovery function is given a `db` handle, you should be careful to avoid creating a cycle from within recovery or invoking queries that may be participating in the current cycle. Attempting to do so can result in inconsistent results.
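Putting it together, a participant query plus its recovery function might look like this (a sketch following the attribute and signature described on this page; all names, including the result type, are illustrative):

```rust
type MyResultValue = String; // stand-in for the query's real result type

#[salsa::query_group(MyStorage)]
trait MyDatabase: salsa::Database {
    // If this query participates in a cycle, `my_recover_fn`
    // supplies its value instead of the whole computation panicking.
    #[salsa::recover(my_recover_fn)]
    fn my_query(&self) -> MyResultValue;
}

fn my_query(db: &dyn MyDatabase) -> MyResultValue {
    // A computation that, for bad inputs, re-enters `my_query`
    // and thereby forms a cycle.
    db.my_query()
}

fn my_recover_fn(_db: &dyn MyDatabase, cycle: &salsa::Cycle) -> MyResultValue {
    format!("cycle detected: {:?}", cycle)
}
```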
Figuring out why recovery did not work
If a cycle occurs and some of the participant queries have `#[salsa::recover]` annotations and others do not, then the query will be treated as irrecoverable and will simply panic. You can use the `Cycle::unexpected_participants` method to figure out why recovery did not succeed and add the appropriate `#[salsa::recover]` annotations.
How Salsa works
Video available
To get the most complete introduction to Salsa's inner workings, check out the "How Salsa Works" video. If you'd like a deeper dive, the "Salsa in more depth" video digs into the details of the incremental algorithm.
If you're in China, watch videos on "How Salsa Works", "Salsa In More Depth".
Key idea
The key idea of `salsa` is that you define your program as a set of queries. Every query is used like a function `K -> V` that maps from some key of type `K` to a value of type `V`. Queries come in two basic varieties:
- Inputs: the base inputs to your system. You can change these whenever you like.
- Functions: pure functions (no side effects) that transform your inputs into other values. The results of queries are memoized to avoid recomputing them a lot. When you make changes to the inputs, we'll figure out (fairly intelligently) when we can re-use these memoized values and when we have to recompute them.
How to use Salsa in three easy steps
Using salsa is as easy as 1, 2, 3...
- Define one or more query groups that contain the inputs and queries you will need. We'll start with one such group, but later on you can use more than one to break up your system into components (or spread your code across crates).
- Define the query functions where appropriate.
- Define the database struct, which contains the storage for all of the inputs/queries you will be using, and may also contain anything else that your code needs (e.g., configuration data).
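Condensed, those three steps look roughly like this in the classic query-group style (a sketch modeled loosely on the `hello_world` example mentioned below):

```rust
// Step 1: a query group with an input query and a derived query.
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld {
    #[salsa::input]
    fn input_string(&self) -> String;

    fn length(&self) -> usize;
}

// Step 2: the query function for the derived query.
fn length(db: &dyn HelloWorld) -> usize {
    db.input_string().len() // re-runs only when `input_string` changes
}

// Step 3: the database, listing the storage of every query group it uses.
#[salsa::database(HelloWorldStorage)]
#[derive(Default)]
struct DatabaseStruct {
    storage: salsa::Storage<Self>,
}

impl salsa::Database for DatabaseStruct {}
```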
To see an example of this in action, check out the `hello_world` example, which has a number of comments explaining how things work.
Digging into the plumbing
Check out the plumbing chapter to see a deeper explanation of the code that salsa generates and how it connects to the salsa library.
Videos
There are currently two videos about Salsa available, but they describe an older version of Salsa and so they are rather outdated:
- How Salsa Works, which gives a high-level introduction to the key concepts involved and shows how to use salsa;
- Salsa In More Depth, which digs into the incremental algorithm and explains -- at a high-level -- how Salsa is implemented.
If you're in China, watch videos on How Salsa Works, Salsa In More Depth.
Plumbing
⚠️ IN-PROGRESS VERSION OF SALSA. ⚠️
This page describes the unreleased "Salsa 2022" version, which is a major departure from older versions of salsa. The code here works but is only available on github and from the
salsa-2022
crate.
This chapter documents the code that salsa generates and its "inner workings". We refer to this as the "plumbing".
Overview
The plumbing section is broken up into chapters:
- The jars and ingredients chapter covers how each salsa item (like a tracked function) specifies what data it needs at runtime, and how links between items work.
- The database and runtime covers the data structures that are used at runtime to coordinate workers, trigger cancellation, track which functions are active and what dependencies they have accrued, and so forth.
- The query operations chapter describes how the major operations on function ingredients work. This text was written for an older version of salsa but the logic is the same:
- The maybe changed after operation determines when a memoized value for a tracked function is out of date.
- The fetch operation computes the most recent value.
- The derived queries flowchart depicts the logic in flowchart form.
- The cycle handling chapter describes what happens when cycles occur.
- The terminology section describes various words that appear throughout.
Jars and ingredients
⚠️ IN-PROGRESS VERSION OF SALSA. ⚠️
This page describes the unreleased "Salsa 2022" version, which is a major departure from older versions of salsa. The code here works but is only available on github and from the
salsa-2022
crate.
This page covers how data is organized in salsa and how links between salsa items (e.g., dependency tracking) work.
Salsa items and ingredients
A salsa item is some item annotated with a salsa annotation that can be included in a jar. For example, a tracked function is a salsa item:
#![allow(unused)] fn main() { #[salsa::tracked] fn foo(db: &dyn Db, input: MyInput) { } }
...and so is a salsa input...
#![allow(unused)] fn main() { #[salsa::input] struct MyInput { } }
...or a tracked struct:
#![allow(unused)] fn main() { #[salsa::tracked] struct MyStruct { } }
Each salsa item needs certain bits of data at runtime to operate.
These bits of data are called ingredients.
Most salsa items generate a single ingredient, but sometimes they make more than one.
For example, a tracked function generates a FunctionIngredient. A tracked struct, however, generates several ingredients: one for the struct itself (a TrackedStructIngredient) and one FunctionIngredient for each value field.
Ingredients define the core logic of salsa
Most of the interesting salsa code lives in these ingredients.
For example, when you create a new tracked struct, the method TrackedStructIngredient::new_struct
is invoked;
it is responsible for determining the tracked struct's id.
Similarly, when you call a tracked function, that is translated into a call to FunctionIngredient::fetch
,
which decides whether there is a valid memoized value to return,
or whether the function must be executed.
Ingredient interfaces are not stable or subject to semver
These interfaces are not meant to be used directly by salsa users. The salsa macros generate code that invokes the ingredients, and the APIs may change in arbitrary ways across salsa versions, since the macros are kept in sync with them.
The Ingredient
trait
Each ingredient implements the Ingredient<DB>
trait, which defines generic operations supported by any kind of ingredient.
For example, the method maybe_changed_after
can be used to check whether some particular piece of data stored in the ingredient may have changed since a given revision.
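The trait itself is not reproduced in this chapter; as a rough sketch of its shape (parameter details assumed), the operation looks something like this:

// Sketch: generic operations available on every ingredient.
pub trait Ingredient<DB: ?Sized> {
    // Could the data identified by `input` have changed after `revision`?
    fn maybe_changed_after(&self, db: &DB, input: DependencyIndex, revision: Revision) -> bool;

    // ...plus further generic operations, e.g. for removing stale outputs
    // or resetting state when a new revision begins...
}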
We'll see below that each database DB
is able to take an IngredientIndex
and use that to get a &dyn Ingredient<DB>
for the corresponding ingredient.
This allows the database to perform generic operations on a numbered ingredient without knowing exactly what the type of that ingredient is.
Jars are a collection of ingredients
When you declare a salsa jar, you list out each of the salsa items that are included in that jar:
#[salsa::jar]
struct Jar(
foo,
MyInput,
MyStruct
);
This expands to a struct like so:
#![allow(unused)] fn main() { struct Jar( <foo as IngredientsFor>::Ingredient, <MyInput as IngredientsFor>::Ingredient, <MyStruct as IngredientsFor>::Ingredient, ) }
The IngredientsFor
trait is used to define the ingredients needed by some salsa item, such as the tracked function foo
or the tracked struct MyInput
.
Each salsa item defines a type I
, so that <I as IngredientsFor>::Ingredient
gives the ingredients needed by I
.
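A sketch of the shape of this trait (exact bounds are assumed; the create_ingredients method appears again later, in the jar-creation code):

pub trait IngredientsFor {
    // The ingredient(s) needed by this item: e.g., a FunctionIngredient
    // for a tracked function, or a struct ingredient plus one function
    // ingredient per field for a tracked struct.
    type Ingredient;

    // Create the ingredients, registering a route for each one.
    fn create_ingredients<DB>(routes: &mut Routes<DB>) -> Self::Ingredient;
}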
Database is a tuple of jars
Salsa's database storage ultimately boils down to a tuple of jar structs, where each jar struct (as we just saw) itself contains the ingredients for the salsa items within that jar. The database can thus be thought of as a list of ingredients, although that list is organized into a 2-level hierarchy.
The reason for this 2-level hierarchy is that it permits separate compilation and privacy. The crate that lists the jars doesn't have to know the contents of the jar to embed the jar struct in the database. And some of the types that appear in the jar may be private to the crate that defines them.
The HasJars trait and the Jars type
Each salsa database implements the HasJars
trait,
generated by the salsa::db
procedural macro.
The HasJars
trait, among other things, defines a Jars
associated type that maps to a tuple containing each of the jars listed in the attribute.
For example, given a database like this...
#[salsa::db(Jar1, ..., JarN)]
struct MyDatabase {
storage: salsa::Storage<Self>
}
...the salsa::db
macro would generate a HasJars
impl that (among other things) contains type Jars = (Jar1, ..., JarN)
:
impl salsa::storage::HasJars for #db {
    type Jars = (#(#jar_paths,)*);
    ...
}
In turn, the salsa::Storage<DB>
type ultimately contains a struct Shared
that embeds DB::Jars
, thus embedding all the data for each jar.
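Putting that together, the overall shape of Storage<DB> is roughly as follows. The field names match the Default impl shown later in this chapter; other details (bounds, extra fields) are assumed:

// Sketch of the storage layout:
pub struct Storage<DB: HasJars> {
    // Data shared across all snapshots: the jars (and hence every
    // ingredient), plus a condvar used for cancellation.
    shared: Arc<Shared<DB>>,
    // Closures that map an IngredientIndex to its ingredient.
    routes: Arc<Routes<DB>>,
    // Per-handle state: active query stack, current revision, etc.
    runtime: Runtime,
}

struct Shared<DB: HasJars> {
    jars: DB::Jars,
    cvar: parking_lot::Condvar,
}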
Ingredient indices
During initialization, each ingredient in the database is assigned a unique index called the IngredientIndex
.
This is a 32-bit number that identifies a particular ingredient from a particular jar.
Routes
In addition to an index, each ingredient in the database also has a corresponding route.
A route is a closure that, given a reference to the DB::Jars
tuple,
returns a &dyn Ingredient<DB>
reference.
The route table allows us to go from the IngredientIndex
for a particular ingredient
to its &dyn Ingredient<DB>
trait object.
The route table is created while the database is being initialized,
as described shortly.
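In code terms, a route is roughly a closure of the following shape (the exact bounds and ownership in salsa may differ):

// Sketch: given the full jars tuple, return one ingredient as a trait object.
type Route<DB> = Box<dyn Fn(&<DB as HasJars>::Jars) -> &dyn Ingredient<DB>>;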
Database keys and dependency keys
A DatabaseKeyIndex
identifies a specific value stored in some specific ingredient.
It combines an IngredientIndex
with a key_index
, which is a salsa::Id
:
/// An "active" database key index represents a database key index
/// that is actively executing. In that case, the `key_index` cannot be
/// None.
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)]
pub struct DatabaseKeyIndex {
pub(crate) ingredient_index: IngredientIndex,
pub(crate) key_index: Id,
}
A DependencyIndex
is similar, but the key_index
is optional.
This is used when we sometimes wish to refer to the ingredient as a whole, and not any specific value within the ingredient.
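By analogy with DatabaseKeyIndex above, its shape is roughly (derives elided):

pub struct DependencyIndex {
    pub(crate) ingredient_index: IngredientIndex,
    // `None` means "the ingredient as a whole".
    pub(crate) key_index: Option<Id>,
}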
These kinds of indices are used to store connections between ingredients. For example, each memoized value has to track its inputs. Those inputs are stored as dependency indices. We can then do things like ask, "did this input change since revision R?" by
- using the ingredient index to find the route and get a &dyn Ingredient<DB>
- and then invoking the maybe_changed_after method on that trait object.
HasJarsDyn
There is one catch in the above setup.
We need the database to be dyn-safe, and we also need to be able to define the database trait and so forth without knowing the final database type to enable separate compilation.
Traits like Ingredient<DB>
require knowing the full DB
type.
If we had one function ingredient directly invoke a method on Ingredient<DB>
, that would imply that it has to be fully generic and only instantiated at the final crate, when the full database type is available.
We solve this via the HasJarsDyn trait. The HasJarsDyn trait exports methods that combine the "find ingredient, invoke method" steps into one:
/// Dyn friendly subset of HasJars
pub trait HasJarsDyn {
fn runtime(&self) -> &Runtime;
fn maybe_changed_after(&self, input: DependencyIndex, revision: Revision) -> bool;
fn cycle_recovery_strategy(&self, input: IngredientIndex) -> CycleRecoveryStrategy;
fn origin(&self, input: DatabaseKeyIndex) -> Option<QueryOrigin>;
fn mark_validated_output(&self, executor: DatabaseKeyIndex, output: DependencyIndex);
/// Invoked when `executor` used to output `stale_output` but no longer does.
/// This method routes that into a call to the [`remove_stale_output`](`crate::ingredient::Ingredient::remove_stale_output`)
/// method on the ingredient for `stale_output`.
fn remove_stale_output(&self, executor: DatabaseKeyIndex, stale_output: DependencyIndex);
/// Informs `ingredient` that the salsa struct with id `id` has been deleted.
/// This means that `id` will not be used in this revision and hence
/// any memoized values keyed by that struct can be discarded.
///
/// In order to receive this callback, `ingredient` must have registered itself
/// as a dependent function using
/// [`SalsaStructInDb::register_dependent_fn`](`crate::salsa_struct::SalsaStructInDb::register_dependent_fn`).
fn salsa_struct_deleted(&self, ingredient: IngredientIndex, id: Id);
}
So, technically, to check if an input has changed, an ingredient takes the following steps (sketched in code after this list):
- Invokes
HasJarsDyn::maybe_changed_after
on thedyn Database
- The impl for this method (generated by
#[salsa::db]
):
- gets the route for the ingredient from the ingredient index
- uses the route to get a
&dyn Ingredient
- invokes
maybe_changed_after
on that ingredient
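A sketch of what the generated impl might look like for the maybe_changed_after case. The accessor methods on the storage (route, jars) are assumed names, not salsa's actual API:

impl HasJarsDyn for MyDatabase {
    fn maybe_changed_after(&self, input: DependencyIndex, revision: Revision) -> bool {
        // 1. Use the ingredient index to look up the route...
        let route = self.storage.route(input.ingredient_index); // hypothetical accessor
        // 2. ...follow the route to get the `&dyn Ingredient<MyDatabase>`...
        let ingredient = route(self.storage.jars()); // hypothetical accessor
        // 3. ...and invoke the operation through the trait object.
        ingredient.maybe_changed_after(self, input, revision)
    }

    // ...the other methods follow the same find-then-invoke pattern...
}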
Initializing the database
The last thing to discuss is how the database is initialized.
The Default
implementation for Storage<DB>
does the work:
impl<DB> Default for Storage<DB>
where
DB: HasJars,
{
fn default() -> Self {
let mut routes = Routes::new();
let jars = DB::create_jars(&mut routes);
Self {
shared: Arc::new(Shared {
jars,
cvar: Default::default(),
}),
routes: Arc::new(routes),
runtime: Runtime::default(),
}
}
}
First, it creates an empty Routes
instance.
Then it invokes the DB::create_jars
method.
The implementation of this method is defined by the #[salsa::db]
macro; it simply invokes the Jar::create_jar
method on each of the jars:
fn create_jars(routes: &mut salsa::routes::Routes<Self>) -> Self::Jars {
    (
        #(
            <#jar_paths as salsa::jar::Jar>::create_jar(routes),
        )*
    )
}
This implementation for create_jar
is generated by the #[salsa::jar]
macro; it simply walks over the representative type for each salsa item and asks it to create its ingredients:
quote! {
    impl<'salsa_db> salsa::jar::Jar<'salsa_db> for #jar_struct {
        type DynDb = dyn #jar_trait + 'salsa_db;

        fn create_jar<DB>(routes: &mut salsa::routes::Routes<DB>) -> Self
        where
            DB: salsa::storage::JarFromJars<Self> + salsa::storage::DbWithJar<Self>,
        {
            #(
                let #field_var_names = <#field_tys as salsa::storage::IngredientsFor>::create_ingredients(routes);
            )*
            Self(#(#field_var_names),*)
        }
    }
}
The code to create the ingredients for any particular item is generated by their associated macros (e.g., #[salsa::tracked]
, #[salsa::input]
), but it always follows a particular structure.
To create an ingredient, we first invoke Routes::push
which creates the routes to that ingredient and assigns it an IngredientIndex
.
We can then invoke (e.g.) FunctionIngredient::new
to create the structure.
The routes to an ingredient are defined as closures that, given the DB::Jars
, can find the data for a particular ingredient.
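A sketch of that pattern with assumed details (the closure shape follows the description above; the real Routes::push may take additional arguments, such as a separate &mut route):

// Register a route for the new ingredient and receive its index.
let index: IngredientIndex = routes.push(|jars: &DB::Jars| {
    let jar = &jars.0;          // find our jar within the jars tuple...
    &jar.my_function_ingredient // ...and our ingredient within the jar (hypothetical field)
});

// Then create the ingredient itself, remembering its assigned index.
let ingredient = FunctionIngredient::new(index);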
Database and runtime
A salsa database struct is declared by the user with the #[salsa::db]
annotation.
It contains all the data that the program needs to execute:
#[salsa::db(jar0...jarn)]
struct MyDatabase {
storage: Storage<Self>,
maybe_other_fields: u32,
}
This data is divided into two categories:
- Salsa-governed storage, contained in the
Storage<Self>
field. This data is mandatory.
- Other fields (like maybe_other_fields) defined by the user. This can be anything. This allows you to give access to special resources or whatever.
Parallel handles
When used across parallel threads, the database type defined by the user must support a "snapshot" operation.
This snapshot should create a clone of the database that can be used by the parallel threads.
The Storage struct itself supports the snapshot operation. The snapshot method returns a Snapshot<DB> type, which prevents these clones from being accessed via an &mut reference.
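As a sketch of the usage pattern (the database, query, and snapshot method names here are hypothetical):

// On the main thread, which holds the one `&mut`-capable database:
let snapshot = db.snapshot(); // returns a salsa::Snapshot<MyDatabase>
std::thread::spawn(move || {
    // A Snapshot<DB> only permits `&`-deref, so this thread can execute
    // queries but can never mutate inputs.
    let _value = snapshot.my_query("some_key".to_string());
});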
The Storage struct
The salsa Storage
struct contains all the data that salsa itself will use and work with.
There are three key bits of data:
- The Shared struct, which contains the data stored across all snapshots. This is primarily the ingredients described in the jars and ingredients chapter, but it also contains some synchronization information (a cond var). This is used for cancellation, as described below.
  - The data in the Shared struct is only shared across threads when other threads are active. Some operations, like mutating an input, require an &mut handle to the Shared struct. This is obtained by using the Arc::get_mut method; obviously this is only possible when all snapshots and threads have ceased executing, since there must be a single handle to the Arc.
- The
Routes
struct, which contains the information to find any particular ingredient -- this is also shared across all handles, and its construction is also described in the jars and ingredients chapter. The routes are separated out from theShared
struct because they are truly immutable at all times, and we want to be able to hold a handle to them while getting&mut
access to theShared
struct. - The
Runtime
struct, which is specific to a particular database instance. It contains the data for a single active thread, along with some links to shared data of its own.
Incrementing the revision counter and getting mutable access to the jars
Salsa's general model is that there is a single "master" copy of the database and, potentially, multiple snapshots.
The snapshots are not directly owned; they are instead enclosed in a Snapshot<DB>
type that permits only &
-deref,
and so the only database that can be accessed with an &mut
-ref is the master database.
Each of the snapshots, however, only holds another handle on the Arc
in Storage
that stores the ingredients.
Whenever the user attempts to do an &mut
-operation, such as modifying an input field, we need to
first cancel any parallel snapshots and wait for those parallel threads to finish.
Once the snapshots have completed, we can use Arc::get_mut
to get an &mut
reference to the ingredient data.
This allows us to get &mut
access without any unsafe code and
guarantees that we have successfully managed to cancel the other worker threads
(or gotten ourselves into a deadlock).
The code to acquire &mut
access to the database is the jars_mut
method:
#![allow(unused)] fn main() { /// Gets mutable access to the jars. This will trigger a new revision /// and it will also cancel any ongoing work in the current revision. /// Any actual writes that occur to data in a jar should use /// [`Runtime::report_tracked_write`]. pub fn jars_mut(&mut self) -> (&mut DB::Jars, &mut Runtime) { // Wait for all snapshots to be dropped. self.cancel_other_workers(); // Increment revision counter. self.runtime.new_revision(); // Acquire `&mut` access to `self.shared` -- this is only possible because // the snapshots have all been dropped, so we hold the only handle to the `Arc`. let shared = Arc::get_mut(&mut self.shared).unwrap(); // Inform other ingredients that a new revision has begun. // This gives them a chance to free resources that were being held until the next revision. let routes = self.routes.clone(); for route in routes.reset_routes() { route(&mut shared.jars).reset_for_new_revision(); } // Return mut ref to jars + runtime. (&mut shared.jars, &mut self.runtime) } }
The key initial point is that it invokes cancel_other_workers
before proceeding:
#![allow(unused)] fn main() { /// Sets cancellation flag and blocks until all other workers with access /// to this storage have completed. /// /// This could deadlock if there is a single worker with two handles to the /// same database! fn cancel_other_workers(&mut self) { loop { self.runtime.set_cancellation_flag(); // If we have unique access to the jars, we are done. if Arc::get_mut(&mut self.shared).is_some() { return; } // Otherwise, wait until some other storage entities have dropped. // We create a mutex here because the cvar api requires it, but we // don't really need one as the data being protected is actually // the jars above. // // The cvar `self.shared.cvar` is notified by the `Drop` impl. let mutex = parking_lot::Mutex::new(()); let mut guard = mutex.lock(); self.shared.cvar.wait(&mut guard); } } }
The Salsa runtime
The salsa runtime offers helper methods that are accessed by the ingredients.
It tracks, for example, the active query stack, and contains methods for adding dependencies between queries (e.g., report_tracked_read
) or resolving cycles.
It also tracks the current revision and information about when values with low or high durability last changed.
Basically, the ingredient structures store the "data at rest" -- like memoized values -- and things that are "per ingredient".
The runtime stores the "active, in-progress" data, such as which queries are on the stack, and/or the dependencies accessed by the currently active query.
Query operations
Each of the query storage structs implements the QueryStorageOps
trait found in the plumbing
module:
pub trait QueryStorageOps<Q>
where
Self: QueryStorageMassOps,
Q: Query,
{
which defines the basic operations that all queries support. The most important are these two:
- maybe changed after: Returns true if the value of the query (for the given key) may have changed since the given revision.
- Fetch: Returns the up-to-date value for the given K (or an error in the case of an "unrecovered" cycle).
Maybe changed after
/// True if the value of `input`, which must be from this query, may have
/// changed after the given revision ended.
///
/// This function should only be invoked with a revision less than the current
/// revision.
fn maybe_changed_after(
&self,
db: &<Q as QueryDb<'_>>::DynDb,
input: DatabaseKeyIndex,
revision: Revision,
) -> bool;
The maybe_changed_after
operation computes whether a query's value may have changed after the given revision. In other words, Q.maybe_changed_after(R) is true if the value of the query Q may have changed in the revisions (R+1)..=R_now, where R_now is the current revision. For example, if R_now is R8, then maybe_changed_after(R5) asks whether the value changed in any of R6, R7, or R8. Note that it doesn't make sense to ask maybe_changed_after(R_now).
Input queries
Input queries are set explicitly by the user. maybe_changed_after
can therefore just check when the value was last set and compare.
Interned queries
Derived queries
The logic for derived queries is more complex. We summarize the high-level ideas here, but you may find the flowchart useful to dig deeper. The terminology section may also be useful; in some cases, we link to that section on the first usage of a word.
- If an existing memo is found, then we check if the memo was verified in the current revision. If so, we can compare its changed at revision and return true or false appropriately.
- Otherwise, we must check whether dependencies have been modified:
- Let R be the revision in which the memo was last verified; we wish to know if any of the dependencies have changed since revision R.
- First, we check the durability. For each memo, we track the minimum durability of the memo's dependencies. If the memo has durability D, and there have been no changes to an input with durability D since the last time the memo was verified, then we can consider the memo verified without any further work.
- If the durability check is not sufficient, then we must check the dependencies individually. For this, we iterate over each dependency D and invoke the maybe changed after operation to check whether D has changed since the revision R.
- If no dependency was modified:
- We can mark the memo as verified and use its changed at revision to return true or false.
- Assuming dependencies have been modified:
- Then we execute the user's query function (same as in fetch), which potentially backdates the resulting value.
- Compare the changed at revision in the resulting memo and return true or false.
Fetch
/// Execute the query, returning the result (often, the result
/// will be memoized). This is the "main method" for
/// queries.
///
/// Returns `Err` in the event of a cycle, meaning that computing
/// the value for this `key` is recursively attempting to fetch
/// itself.
fn fetch(&self, db: &<Q as QueryDb<'_>>::DynDb, key: &Q::Key) -> Q::Value;
The fetch
operation computes the value of a query. It prefers to reuse memoized values when it can.
Input queries
Input queries simply load the result from the table.
Interned queries
Interned queries map the input into a hashmap to find an existing integer. If none is present, a new value is created.
Derived queries
The logic for derived queries is more complex. We summarize the high-level ideas here, but you may find the flowchart useful to dig deeper. The terminology section may also be useful; in some cases, we link to that section on the first usage of a word.
- If an existing memo is found, then we check if the memo was verified in the current revision. If so, we can directly return the memoized value.
- Otherwise, if the memo contains a memoized value, we must check whether dependencies have been modified:
- Let R be the revision in which the memo was last verified; we wish to know if any of the dependencies have changed since revision R.
- First, we check the durability. For each memo, we track the minimum durability of the memo's dependencies. If the memo has durability D, and there have been no changes to an input with durability D since the last time the memo was verified, then we can consider the memo verified without any further work.
- If the durability check is not sufficient, then we must check the dependencies individually. For this, we iterate over each dependency D and invoke the maybe changed after operation to check whether D has changed since the revision R.
- If no dependency was modified:
- We can mark the memo as verified and return its memoized value.
- Assuming dependencies have been modified or the memo does not contain a memoized value:
- Then we execute the user's query function.
- Next, we compute the revision in which the memoized value last changed:
- Backdate: If there was a previous memoized value, and the new value is equal to that old value, then we can backdate the memo, which means to use the 'changed at' revision from before.
- Thanks to backdating, it is possible for a dependency of the query to have changed in some revision R1 but for the output of the query to have changed in some revision R2 where R2 predates R1.
- Otherwise, we use the current revision.
- Construct a memo for the new value and return it.
Derived queries flowchart
Derived queries are by far the most complex. This flowchart documents the flow of the maybe changed after and fetch operations. The flowchart can be edited on draw.io.
Cycles
Cross-thread blocking
The interface for blocking across threads now works as follows:
- When one thread
T1
wishes to block on a queryQ
being executed by another threadT2
, it invokesRuntime::try_block_on
. This will check for cycles. Assuming no cycle is detected, it will blockT1
untilT2
has completed withQ
. At that point,T1
reawakens. However, we don't know the result of executingQ
, soT1
now has to "retry". Typically, this will result in successfully reading the cached value. - While
T1
is blocking, the runtime moves its query stack (aVec
) into the shared dependency graph data structure. WhenT1
reawakens, it recovers ownership of its query stack before returning fromtry_block_on
.
Cycle detection
When a thread T1
attempts to execute a query Q
, it will try to load the value for Q
from the memoization tables. If it finds an InProgress
marker, that indicates that Q
is currently being computed. This indicates a potential cycle. T1
will then try to block on the query Q
:
- If
Q
is also being computed byT1
, then there is a cycle. - Otherwise, if
Q
is being computed by some other threadT2
, we have to check whetherT2
is (transitively) blocked onT1
. If so, there is a cycle.
These two cases are handled internally by the Runtime::try_block_on
function. Detecting the intra-thread cycle case is easy; to detect cross-thread cycles, the runtime maintains a dependency DAG between threads (identified by RuntimeId
). Before adding an edge T1 -> T2
(i.e., T1
is blocked waiting for T2
) into the DAG, it checks whether a path exists from T2
to T1
. If so, we have a cycle and the edge cannot be added (otherwise the DAG would no longer be acyclic).
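The acyclicity check itself is ordinary graph reachability. A self-contained sketch (the data representation is assumed; salsa's real dependency graph also stores query stacks and wait results):

use std::collections::{HashMap, HashSet, VecDeque};

type RuntimeId = u32; // stand-in for salsa's RuntimeId

// Returns true if adding the edge `from -> to` would create a cycle,
// i.e. if a path already exists from `to` back to `from`.
fn would_create_cycle(
    edges: &HashMap<RuntimeId, Vec<RuntimeId>>,
    from: RuntimeId,
    to: RuntimeId,
) -> bool {
    let mut queue = VecDeque::from([to]);
    let mut seen = HashSet::new();
    while let Some(thread) = queue.pop_front() {
        if thread == from {
            return true; // `to` already reaches `from`: the edge would close a loop
        }
        if seen.insert(thread) {
            queue.extend(edges.get(&thread).into_iter().flatten().copied());
        }
    }
    false
}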
When a cycle is detected, the current thread T1
has full access to the query stacks that are participating in the cycle. Consider: naturally, T1
has access to its own stack. There is also a path T2 -> ... -> Tn -> T1
of blocked threads. Each of the blocked threads T2 ..= Tn
will have moved their query stacks into the dependency graph, so those query stacks are available for inspection.
Using the available stacks, we can create a list of cycle participants Q0 ... Qn
and store that into a Cycle
struct. If none of the participants Q0 ... Qn
have cycle recovery enabled, we panic with the Cycle
struct, which will trigger all the queries on this thread to panic.
Cycle recovery via fallback
If any of the cycle participants Q0 ... Qn
has cycle recovery set, we recover from the cycle. To help explain how this works, we will use this example cycle which contains three threads. Beginning with the current query, the cycle participants are QA3
, QB2
, QB3
, QC2
, QC3
, and QA2
.
              The cyclic
              edge we have
              failed to add.
                         :
     A                   :  B                 C
                         :
     QA1                 v  QB1               QC1
  ┌► QA2       ┌─────────► QB2       ┌─────► QC2
  │  QA3 ──────┘           QB3 ──────┘       QC3 ───┐
  │                                                 │
  └─────────────────────────────────────────────────┘
Recovery works in phases:
- Analyze: As we enumerate the query participants, we collect their collective inputs (all queries invoked so far by any cycle participant) and the maximum changed-at and minimum durability. We then remove the cycle participants themselves from this list of inputs, leaving only the queries external to the cycle.
- Mark: For each query Q that is annotated with
#[salsa::recover]
, we mark it and all of its successors on the same thread by setting itscycle
flag to thec: Cycle
we constructed earlier; we also reset its inputs to the collective inputs gathered during analysis. If those queries resume execution later, those marks will trigger them to immediately unwind and use cycle recovery, and the inputs will be used as the inputs to the recovery value.
  - Note that we mark all the successors of Q on the same thread, whether or not they have recovery set. We'll discuss later how this is important in the case where the active thread (A, here) doesn't have any recovery set.
- Unblock: Each blocked thread T that has a recovering query is forcibly reawoken; the outgoing edge from that thread to its successor in the cycle is removed. Its condvar is signalled with a
WaitResult::Cycle(c)
. When the thread reawakens, it will see that and start unwinding with the cyclec
- Handle the current thread: Finally, we have to choose how to have the current thread proceed. If the current thread includes any queries with recovery information, then we can begin unwinding. Otherwise, the current thread simply continues as if there had been no cycle, and so the cyclic edge is added to the graph and the current thread blocks. This is possible because some other thread had recovery information and therefore has been awoken.
Let's walk through the process with a few examples.
Example 1: Recovery on the detecting thread
Consider the case where only the query QA2 has recovery set. It and QA3 will be marked with their cycle
flag set to c: Cycle
. Threads B and C will not be unblocked, as they do not have any cycle recovery nodes. The current thread (Thread A) will initiate unwinding with the cycle c
as the value. Unwinding will pass through QA3 and be caught by QA2. QA2 will substitute the recovery value and return normally. QA1 and QC3 will then complete normally and so forth, on up until all queries have completed.
Example 2: Recovery in two queries on the detecting thread
Consider the case where both query QA2 and QA3 have recovery set. It proceeds the same as Example 1 up until the current thread initiates unwinding. When QA3 receives the cycle, it stores its recovery value and completes normally. QA2 then adds QA3 as an input dependency: at that point, QA2 observes that it too has the cycle mark set, and so it initiates unwinding. The rest of QA2 therefore never executes. This unwinding is caught by QA2's entry point and it stores the recovery value and returns normally. QA1 and QC3 then continue normally, as they have not had their cycle
flag set.
Example 3: Recovery on another thread
Now consider the case where only the query QB2 has recovery set. It and QB3 will be marked with the cycle c: Cycle
and thread B will be unblocked; the edge QB3 -> QC2
will be removed from the dependency graph. Thread A will then add an edge QA3 -> QB2
and block on thread B. At that point, thread A releases the lock on the dependency graph, and so thread B is re-awoken. It observes the WaitResult::Cycle
and initiates unwinding. Unwinding proceeds through QB3 and into QB2, which recovers. QB1 is then able to execute normally, as is QA3, and execution proceeds from there.
Example 4: Recovery on all queries
Now consider the case where all the queries have recovery set. In that case, they are all marked with the cycle, and all the cross-thread edges are removed from the graph. Each thread will independently awaken and initiate unwinding. Each query will recover.
Terminology
Backdate
Backdating is when we mark a value that was computed in revision R as having last changed in some earlier revision. This is done when we have an older memo M and we can compare the two values to see that, while the dependencies to M may have changed, the result of the query function did not.
Changed at
The changed at revision for a memo is the revision in which that memo's value last changed. Typically, this is the same as the revision in which the query function was last executed, but it may be an earlier revision if the memo was backdated.
Dependency
A dependency of a query Q is some other query Q1 that was invoked as part of computing the value for Q (typically, invoked by Q's query function).
Derived query
A derived query is a query whose value is defined by the result of a user-provided query function. That function is executed to get the result of the query. Unlike input queries, the result of a derived query can always be recomputed whenever needed simply by re-executing the function.
Durability
Durability is an optimization that we use to avoid checking the dependencies of a query individually. It was introduced in RFC #5.
Input query
An input query is a query whose value is explicitly set by the user. When that value is set, a durability can also be provided.
Ingredient
An ingredient is an individual piece of storage used to create a salsa item. See the jars and ingredients chapter for more details.
LRU
The set_lru_capacity
method can be used to fix the maximum capacity for a query at a specific number of values. If more values are added after that point, then salsa will drop the values from older memos to conserve memory (we always retain the dependency information for those memos, however, so that we can still compute whether values may have changed, even if we don't know what that value is). The LRU mechanism was introduced in RFC #4.
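The exact calling convention has varied across salsa versions; as a sketch in the query-group style (the query struct name is hypothetical, and in_db_mut is the accessor used by older salsa releases):

// Cap the memoized values for `my_query` at 128 entries. Older values
// are evicted, but their dependency information is retained.
MyQueryQuery.in_db_mut(&mut db).set_lru_capacity(128);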
Memo
A memo stores information about the last time that a query function for some query Q was executed:
- Typically, it contains the value that was returned from that function, so that we don't have to execute it again.
- However, this is not always true: some queries don't cache their result values, and values can also be dropped as a result of LRU collection. In those cases, the memo just stores dependency information, which can still be useful to determine if other queries that have Q as a dependency may have changed.
- The revision in which the memo was last verified.
- The changed at revision in which the memo's value last changed. (Note that it may be backdated.)
- The minimum durability of the memo's dependencies.
- The complete set of dependencies, if available, or a marker that the memo has an untracked dependency.
Query
Query function
The query function is the user-provided function that we execute to compute the value of a derived query. Salsa assumes that all query functions are 'pure' functions of their dependencies unless the user reports an untracked read. Salsa always assumes that functions have no important side-effects (i.e., that they don't send messages over the network whose results you wish to observe) and thus that it doesn't have to re-execute functions unless it needs their return value.
Revision
A revision is a monotonically increasing integer that we use to track the "version" of the database. Each time the value of an input query is modified, we create a new revision.
Salsa item
A salsa item is something that is decorated with a #[salsa::foo]
macro, like a tracked function or struct.
See the jars and ingredients chapter for more details.
Salsa struct
A salsa struct is a struct decorated with one of the salsa macros:
#[salsa::tracked]
#[salsa::input]
#[salsa::interned]
See the salsa overview for more details.
Untracked dependency
An untracked dependency is an indication that the result of a derived query depends on something not visible to the salsa database. Untracked dependencies are created by invoking report_untracked_read
or report_synthetic_read
. When an untracked dependency is present, derived queries are always re-executed if the durability check fails (see the description of the fetch operation for more details).
Verified
A memo is verified in a revision R if we have checked that its value is still up-to-date (i.e., if we were to reexecute the query function, we are guaranteed to get the same result). Each memo tracks the revision in which it was last verified to avoid repeatedly checking whether dependencies have changed during the fetch and maybe changed after operations.
RFCs
The Salsa RFC process is used to describe the motivations for major changes made to Salsa. RFCs are recorded here in the Salsa book as a historical record of the considerations that were raised at the time. Note that the contents of RFCs, once merged, is typically not updated to match further changes. Instead, the rest of the book is updated to include the RFC text and then kept up to date as more PRs land and so forth.
Creating an RFC
If you'd like to propose a major new Salsa feature, simply clone the repository and create a new chapter under the list of RFCs based on the RFC template. Then open a PR with a subject line that starts with "RFC:".
RFC vs Implementation
The RFC can be in its own PR, or it can also include work on the implementation, whatever works best for you.
Does my change need an RFC?
Not all PRs require RFCs. RFCs are only needed for larger features or major changes to how Salsa works. And they don't have to be super complicated, but they should capture the most important reasons you would like to make the change. When in doubt, it's ok to just open a PR, and we can always request an RFC if we want one.
Description/title
Metadata
- Author: (Github username(s) or real names, as you prefer)
- Date: (today's date)
- Introduced in: https://github.com/salsa-rs/salsa/pull/1 (please update once you open your PR)
Summary
Summarize the effects of the RFC in bullet point form.
Motivation
Say something about your goals here.
User's guide
Describe effects on end users here.
Reference guide
Describe implementation details or other things here.
Frequently asked questions
Use this section to add in design notes, downsides, rejected approaches, or other considerations.
Query group traits
Metadata
- Author: nikomatsakis
- Date: 2019-01-15
- Introduced in: https://github.com/salsa-rs/salsa-rfcs/pull/1
Motivation
- Support
dyn QueryGroup
for each query group trait as well asimpl QueryGroup
- dyn QueryGroup
will be much more convenient, at the cost of runtime efficiency
- Don't require you to redeclare each query in the final database, just the query groups
User's guide
Declaring a query group
Users will declare query groups by decorating a trait with salsa::query_group
:
#[salsa::query_group(MyGroupStorage)]
trait MyGroup {
// Inputs are annotated with `#[salsa::input]`. For inputs, the final trait will include
// a `set_my_input(&mut self, key: K1, value: V1)` method automatically added,
// as well as possibly other mutation methods.
#[salsa::input]
fn my_input(&self, key: K1) -> V1;
// "Derived" queries are just a getter.
fn my_query(&self, key: K2) -> V2;
}
The query_group
attribute is a procedural macro. It takes as
argument the name of the storage struct for the query group --
this is a struct, generated by the macro, which represents the query
group as a whole. It is attached to a trait definition which defines the
individual queries in the query group.
The macro generates three things that users interact with:
- the trait, here named
MyGroup
. This will be used when writing the definitions for the queries and other code that invokes them. - the storage struct, here named
MyGroupStorage
. This will be used later when constructing the final database. - query structs, named after each query but converted to camel-case
and with the word Query appended (e.g.,
MyInputQuery
formy_input
). These types are rarely needed, but are presently useful for things like invoking the GC. These types violate our rule that "things the user needs to name should be given names by the user", but we choose not to fully resolve this question in this RFC.
In addition, the macro generates a number of structs that users should not have to be aware of. These are described in the "reference guide" section.
Controlling query modes
Input queries, as described in the trait, are specified via the
#[salsa::input]
attribute.
Derived queries can be customized by the following attributes,
attached to the getter method (e.g., fn my_query(..)
):
- #[salsa::invoke(foo::bar)] specifies the path to the function to invoke when the query is called (the default is a function with the same name as the query, here my_query); see the sketch after this list.
- #[salsa::volatile] specifies a "volatile" query, which is assumed to read untracked input and hence must be re-executed on every revision.
- #[salsa::dependencies] specifies a "dependencies-only" query, which tracks its dependencies but does not memoize its value; the value is recomputed whenever it is needed.
Creating the database
Creating a salsa database works by using a #[salsa::database(..)]
attribute. The ..
content should be a list of paths leading to the
storage structs for each query group that the database will
implement. It is no longer necessary to list the individual
queries. In addition to the salsa::database
attribute, the struct must
have access to a salsa::Runtime
and implement the salsa::Database
trait. Hence the complete declaration looks roughly like so:
#[salsa::database(MyGroupStorage)]
struct MyDatabase {
    runtime: salsa::Runtime<MyDatabase>,
}

impl salsa::Database for MyDatabase {
    fn salsa_runtime(&self) -> &salsa::Runtime<MyDatabase> {
        &self.runtime
    }
}
This (procedural) macro generates various impls and types that cause
MyDatabase
to implement all the traits for the query groups it
supports, and which customize the storage in the runtime to have all
the data needed. Users should not have to interact with these details,
and they are written out in the reference guide section.
Reference guide
The goal here is not to give the full details of how to do the
lowering, but to describe the key concepts. Throughout the text, we
will refer to names (e.g., MyGroup
or MyGroupStorage
) that appear
in the example from the User's Guide -- this indicates that we use
whatever name the user provided.
The plumbing::QueryGroup
trait
The QueryGroup
trait is a new trait added to the plumbing module. It
is implemented by the query group storage struct MyGroupStorage
. Its
role is to link from that struct to the various bits of data that the
salsa runtime needs:
pub trait QueryGroup<DB: Database> {
type GroupStorage;
type GroupKey;
}
This trait is implemented by the storage struct (MyGroupStorage
)
in our example. You can see there is a bit of confusing naming going
on here -- what we call (for users) the "storage struct" actually
does not wind up containing the true storage (that is, the hashmaps
and things salsa uses). Instead, it merely implements the QueryGroup
trait, which has associated types that lead us to structs we need:
- the group storage contains the hashmaps and things for all the queries in the group
- the group key is an enum with variants for each of the queries. It basically stores all the data needed to identify some particular query value from within the group -- that is, the name of the query, plus the keys used to invoke it (a sketch follows after this list).
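For the MyGroup example, the generated group key would look roughly like this (the generated name and exact layout are described later, in the lowering sections):

// Sketch: one variant per query, carrying that query's key type.
enum MyGroupGroupKey {
    my_input(K1),
    my_query(K2),
}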
As described further on, the #[salsa::query_group]
macro is
responsible for generating an impl of this trait for the
MyGroupStorage
struct, along with the group storage and group key
type definitions.
The plumbing::HasQueryGroup<G>
trait
The HasQueryGroup<G>
trait is a new addition to the plumbing
module. It is implemented by the database struct MyDatabase
for
every query group that MyDatabase
supports. Its role is to offer
methods that move back and forth between the context of the full
database to the context of an individual query group:
pub trait HasQueryGroup<G>: Database
where
G: QueryGroup<Self>,
{
/// Access the group storage struct from the database.
fn group_storage(db: &Self) -> &G::GroupStorage;
/// "Upcast" a group key into a database key.
fn database_key(group_key: G::GroupKey) -> Self::DatabaseKey;
}
Here the "database key" is an enum that contains variants for each group. Its role is to take group key and puts it into the context of the entire database.
The Query
trait
The query trait (pre-existing) is extended to include links to its group, and methods to convert from the group storage to the query storage, plus methods to convert from a query key up to the group key:
pub trait Query<DB: Database>: Debug + Default + Sized + 'static {
/// Type that you give as a parameter -- for queries with zero
/// or more than one input, this will be a tuple.
type Key: Clone + Debug + Hash + Eq;
/// What value does the query return?
type Value: Clone + Debug;
/// Internal struct storing the values for the query.
type Storage: plumbing::QueryStorageOps<DB, Self> + Send + Sync;
/// Associated query group struct.
type Group: plumbing::QueryGroup<
DB,
GroupStorage = Self::GroupStorage,
GroupKey = Self::GroupKey,
>;
/// Generated struct that contains storage for all queries in a group.
type GroupStorage;
/// Type that identifies a particular query within the group + its key.
type GroupKey;
/// Extract storage for this query from the storage for its group.
fn query_storage(group_storage: &Self::GroupStorage) -> &Self::Storage;
/// Create group key for this query.
fn group_key(key: Self::Key) -> Self::GroupKey;
}
Converting to/from the context of the full database generically
Putting all the previous plumbing traits together, this means that given:
- a database
DB
that implements HasQueryGroup<G>
; - a group struct
G
that implementsQueryGroup<DB>
; and, - and a query struct
Q
that implementsQuery<DB, Group = G>
we can (generically) get the storage for the individual query
Q
out from the database db
via a two-step process:
let group_storage = HasQueryGroup::group_storage(db);
let query_storage = Query::query_storage(group_storage);
Similarly, we can convert from the key to an individual query up to the "database key" in a two-step process:
let group_key = Query::group_key(key);
let db_key = HasQueryGroup::database_key(group_key);
Lowering query groups
The role of the #[salsa::query_group(MyGroupStorage)] trait MyGroup { .. }
macro is primarily to generate the group storage struct and the
impl of QueryGroup
. That involves generating the following things:
- the query trait
MyGroup
itself, but with:salsa::foo
attributes stripped#[salsa::input]
methods expanded to include setters:fn set_my_input(&mut self, key: K1, value__: V1);
fn set_constant_my_input(&mut self, key: K1, value__: V1);
- the query group storage struct
MyGroupStorage
- We also generate an impl of
QueryGroup<DB>
forMyGroupStorage
, linking to the internal storage struct and group key enum
- We also generate an impl of
- the individual query types
- Ideally, we would use Rust hygiene to hide these struct, but as
that is not currently possible they are given names based on the
queries, but converted to camel-case (e.g.,
MyInputQuery
andMyQueryQuery
). - They implement the
salsa::Query
trait.
- Ideally, we would use Rust hygiene to hide these struct, but as
that is not currently possible they are given names based on the
queries, but converted to camel-case (e.g.,
- the internal group storage struct
- Ideally, we would use Rust hygiene to hide this struct, but as
that is not currently possible it is entitled
MyGroupGroupStorage<DB>
. Note that it is generic with respect to the databaseDB
. This is because the actual query storage requires sometimes storing database key's and hence we need to know the final database type. - It contains one field per query with a link to the storage information
for that query:
my_query: <MyQueryQuery as salsa::plumbing::Query<DB>>::Storage
- (the
MyQueryQuery
type is also generated, see the "individual query types" below)
- The internal group storage struct offers a public, inherent method
for_each_query
:fn for_each_query(db: &DB, op: &mut dyn FnMut(...)
- this is invoked by the code generated by
#[salsa::database]
when implementing thefor_each_query
method of theplumbing::DatabaseOps
trait
- Ideally, we would use Rust hygiene to hide this struct, but as
that is not currently possible it is entitled
- the group key
- Again, ideally we would use hygiene to hide the name of this struct,
but since we cannot, it is entitled
MyGroupGroupKey
- It is an enum which contains one variant per query with the value being the key:
my_query(<MyQueryQuery as salsa::plumbing::Query<DB>>::Key)
- The group key enum offers a public, inherent method
maybe_changed_after
:fn maybe_changed_after<DB>(db: &DB, db_descriptor: &DB::DatabaseKey, revision: Revision)
- it is invoked when implementing
maybe_changed_after
for the database key
- Again, ideally we would use hygiene to hide the name of this struct,
but since we cannot, it is entitled
Lowering database storage
The #[salsa::database(MyGroupStorage)]
attribute macro creates the links to the query groups.
It generates the following things:
- impl of
HasQueryGroup<MyGroup>
forMyDatabase
- Naturally, there is one such impl for each query group.
- the database key enum
- Ideally, we would use Rust hygiene to hide this enum, but currently
it is called
__SalsaDatabaseKey
. - The database key is an enum with one variant per query group:
MyGroupStorage(<MyGroupStorage as QueryGroup<MyDatabase>>::GroupKey)
- Ideally, we would use Rust hygiene to hide this enum, but currently
it is called
- the database storage struct
- Ideally, we would use Rust hygiene to hide this enum, but currently
it is called
__SalsaDatabaseStorage
. - The database storage struct contains one field per query group, storing
its internal storage:
my_group_storage: <MyGroupStorage as QueryGroup<MyDatabase>>::GroupStorage
- Ideally, we would use Rust hygiene to hide this enum, but currently
it is called
- impl of
plumbing::DatabaseStorageTypes
forMyDatabase
- This is a plumbing trait that links to the database storage / database key types.
- The
salsa::Runtime
uses it to determine what data to include. The query types use it to determine a database-key.
- impl of
plumbing::DatabaseOps
forMyDatabase
- This contains a
for_each_query
method, which is implemented by invoking, in turn, the inherent methods defined on each query group storage struct.
- This contains a
- impl of
plumbing::DatabaseKey
for the database key enum- This contains a method
maybe_changed_after
. We implement this by matching to get a particular group key, and then invoking the inherent method on the group key struct.
- This contains a method
Alternatives
This proposal results from a fair amount of iteration. Compared to the status quo, there is one primary downside. We also explain a few things here that may not be obvious.
Why include a group storage struct?
You might wonder why we need the MyGroupStorage
struct at all. It is a touch of boilerplate,
but there are several advantages to it:
- You can't attach associated types to the trait itself. This is because the "type version"
of the trait (
dyn MyGroup
) may not be available, since not all traits are dyn-capable. - We try to keep to the principle that "any type that might be named
externally from the macro is given its name by the user". In this
case, the
[salsa::database]
attribute needed to name group storage structs.- In earlier versions, we tried to auto-generate these names, but
this failed because sometimes users would want to
pub use
the query traits and hide their original paths. - (One exception to this principle today are the per-query structs.)
- In earlier versions, we tried to auto-generate these names, but
this failed because sometimes users would want to
- We expect that we can use the
MyGroupStorage
to achieve more encapsulation in the future. While the struct must be public and named from the database, the trait (and query key/value types) actually does not have to be.
Downside: Size of a database key
Database keys now wind up with two discriminants: one to identify the
group, and one to identify the query. That's a bit sad. This could be
overcome by using unsafe code: the idea would be that a group/database
key would be stored as the pair of an integer and a union
. Each
group within a given database would be assigned a range of integer
values, and the unions would store the actual key values. We leave
such a change for future work.
Future possibilities
Here are some ideas we might want to do later.
No generics
We leave generic parameters on the query group trait etc for future work.
Public / private
We'd like the ability to make more details from the query groups private. This will require some tinkering.
Inline query definitions
Instead of defining queries in separate functions, it might be nice to have the option of defining query methods in the trait itself:
#[salsa::query_group(MyGroupStorage)]
trait MyGroup {
#[salsa::input]
fn my_input(&self, key: K1) -> V1;
fn my_query(&self, key: K2) -> V2 {
// define my-query right here!
}
}
It's a bit tricky to figure out how to handle this, so that is left for future work. Also, it would mean that the method body itself is inside of a macro (the procedural macro) which can make IDE integration harder.
Non-query functions
It might be nice to be able to include functions in the trait that are
not queries, but rather helpers that compose queries. This should be
pretty easy, just need a suitable #[salsa]
attribute.
Summary
- We introduce
#[salsa::interned]
queries which convert aKey
type into a numeric index of typeValue
, whereValue
is either the typeInternId
(defined by salsa) or some newtype thereof.
foo
also produces an inverselookup_foo
method that converts back from theValue
to theKey
that was interned. - The
InternId
type (defined by salsa) is basically a newtype'd integer, but it internally usesNonZeroU32
to enable space-saving optimizations in memory layout. - The
Value
types can be any type that implements thesalsa::InternIndex
trait, also introduced by this RFC. This trait has two methods,from_intern_id
andas_intern_id
. - The interning is integrated into the GC and tracked like any other query, which means that interned values can be garbage-collected, and any computation that was dependent on them will be collected.
Motivation
The need for interning
Many salsa applications wind up needing the ability to construct
"interned keys". Frequently this pattern emerges because we wish to
construct identifiers for things in the input. These identifiers
generally have a "tree-like shape". For example, in a compiler, there
may be some set of input files -- these are enumerated in the inputs
and serve as the "base" for a path that leads to items in the user's
input. But within an input file, there are additional structures, such
as struct
or impl
declarations, and these structures may contain
further structures within them (such as fields or methods). This gives
rise to a path grammar like the following, which can be used to identify a given item:
PathData = <file-name>
| PathData / <identifier>
These paths could be represented in the compiler with an Arc
, but
because they are omnipresent, it is convenient to intern them instead
and use an integer. Integers are Copy
types, which is convenient,
and they are also small (32 bits typically suffices in practice).
Why interning is difficult today: garbage collection
Unfortunately, integrating interning into salsa at present presents some hard choices, particularly with a long-lived application. You can easily add an interning table into the database, but unless you do something clever, it will simply grow and grow forever. But as the user edits their programs, some paths that used to exist will no longer be relevant -- for example, a given file or impl may be removed, invalidating all those paths that were based on it.
Due to the nature of salsa's recomputation model, it is not easy to detect when paths that used to exist in a prior revision are no longer relevant in the next revision. This is because salsa never explicitly computes "diffs" of this kind between revisions -- it just finds subcomputations that might have gone differently and re-executes them. Therefore, if the code that created the paths (e.g., that processed the result of the parser) is part of a salsa query, it will simply not re-create the invalidated paths -- there is no explicit "deletion" point.
In fact, the same is true of all of salsa's memoized query values. We
may find that in a new revision, some memoized query values are no
longer relevant. For example, in revision R1, perhaps we computed
foo(22)
and foo(44)
, but in the new input, we now only need to
compute foo(22)
. The foo(44)
value is still memoized, we just
never asked for its value. This is why salsa includes a garbage
collector, which can be used to cleanup these memoized values that are
no longer relevant.
But using a garbage collection strategy with a hand-rolled interning scheme is not easy. You could trace through all the values in salsa's memoization tables to implement a kind of mark-and-sweep scheme, but that would require salsa to add such a mechanism. It might also be quite a lot of tracing! The current salsa GC mechanism has no need to walk through the values themselves in a memoization table; it only examines the keys and the metadata (unless we are freeing a value, of course).
How this RFC changes the situation
This RFC presents an alternative. The idea is to move the interning into salsa itself by creating special "interning queries". Dependencies on these queries are tracked like any other query and hence they integrate naturally with salsa's garbage collection mechanisms.
User's guide
This section covers how interned queries are expected to be used.
Declaring an interned query
You can declare an interned query like so:
#[salsa::query_group]
trait Foo {
#[salsa::interned]
fn intern_path_data(&self, data: PathData) -> salsa::InternId;
}
Query keys. Like any query, these queries can take any number of keys. If multiple keys are provided, then the interned key is a tuple of each key value. In order to be interned, the keys must implement Clone, Hash and Eq.
Return type. The return type of an interned query may be any type that implements salsa::InternKey: salsa provides an impl for the type salsa::InternId, but you can implement it for your own types.
Inverse query. For each interning query, we automatically generate a reverse query that inverts the interning step. It is named lookup_XXX, where XXX is the name of the query. Hence here it would be fn lookup_intern_path_data(&self, key: salsa::InternId) -> PathData.
Expected usage
Using an interned query is quite straightforward. You simply invoke it with a key, and you get back an integer; you can then use the generated lookup method to convert back to the original value:
let key = db.intern_path_data(path_data1);
let path_data2 = db.lookup_intern_path_data(key);
Note that the interned value will be cloned -- so, like all Salsa values, it is best if that is a cheap operation. Interestingly, interning can help to keep recursive, tree-shaped values cheap, because the "pointers" within can be replaced with interned keys.
Custom return types
The return type for an intern query does not have to be an InternId. It can be any type that implements the salsa::InternKey trait:
pub trait InternKey {
/// Create an instance of the intern-key from an `InternId` value.
fn from_intern_id(v: InternId) -> Self;
/// Extract the `InternId` with which the intern-key was created.
fn as_intern_id(&self) -> InternId;
}
Recommended practice
This section shows the recommended practice for using interned keys, building on the Path and PathData example that we've been working with.
Naming convention
First, note the recommended naming convention: the intern key is Foo and the key's associated data is FooData (in our case, Path and PathData). The intern key is given the shorter name because it is used far more often. Moreover, other types should never store the full data, but rather should store the interned key.
Defining the intern key
The intern key should always be a newtype struct that implements the InternKey trait. So, something like this:
// Derives assumed here so the key is cheap to copy and usable as a query key.
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct Path(InternId);
impl salsa::InternKey for Path {
fn from_intern_id(v: InternId) -> Self {
Path(v)
}
fn as_intern_id(&self) -> InternId {
self.0
}
}
Convenient lookup method
It is often convenient to add a lookup method to the newtype key:
impl Path {
// Adding this method is often convenient, since you can then
// write `path.lookup(db)` to access the data, which reads a bit better.
pub fn lookup(&self, db: &impl MyDatabase) -> PathData {
db.lookup_intern_path_data(*self)
}
}
Defining the data type
Recall that our paths were defined by a recursive grammar like so:
PathData = <file-name>
         | PathData / <identifier>
This recursion is quite typical of salsa applications. The recommended way to encode it in the PathData structure itself is to build on other intern keys, like so:
#[derive(Clone, Hash, PartialEq, Eq)]
enum PathData {
Root(String),
Child(Path, String),
// ^^^^ Note that the recursive reference here
// is encoded as a Path.
}
Note though that the PathData type will be cloned whenever the value for an interned key is looked up, and it may also be cloned to store dependency information between queries. So, as an optimization, you might prefer to avoid String in favor of Arc<String> -- or even intern the strings as well.
Interaction with the garbage collector
Interned keys can be garbage collected as normal, with one caveat. Even if requested, Salsa will never collect the results generated in the current revision. This is because it would permit the same key to be interned twice in the same revision, possibly mapping to distinct intern keys each time.
Note that if an interned key is collected, its index will be re-used. Salsa's dependency tracking system should ensure that anything incorporating the older value is considered dirty, but you may see the same index showing up more than once in the logs.
Reference guide
Interned keys are implemented using a hash-map that maps from the interned data to its index, as well as a vector containing (for each index) various bits of data. In addition to the interned data, we must track the revision in which the value was interned and the revision in which it was last accessed, to help manage the interaction with the GC. Finally, we have to track some sort of free list of keys that can be re-used. The current implementation never actually shrinks the vectors and maps from their maximum size, but this might be a useful thing to be able to do (this is effectively a memory allocator, so standard allocation strategies could be used here).
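A minimal sketch of that layout, with illustrative names (this is not salsa's actual internal representation):
use std::collections::HashMap;
/// Stand-in for salsa's internal revision counter.
type Revision = u64;
struct InternTable<K> {
    /// Maps the interned data to its index.
    map: HashMap<K, u32>,
    /// For each index, the interned data plus the GC bookkeeping.
    values: Vec<InternValue<K>>,
    /// Indices freed by the GC, available for re-use.
    free_list: Vec<u32>,
}
struct InternValue<K> {
    data: K,
    /// Revision in which the value was interned.
    interned_at: Revision,
    /// Revision in which the value was last accessed.
    accessed_at: Revision,
}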
InternId
Presently the InternId type is implemented to wrap a NonZeroU32:
pub struct InternId {
value: NonZeroU32,
}
This means that Option<InternId> (or Option<Path>, continuing our example from before) will only be a single word. To accommodate this, the InternId constructors require that the value is less than InternId::MAX; the value is deliberately set low (currently to 0xFFFF_FF00) to allow for more sentinel values in the future (Rust doesn't presently expose the capability of having sentinel values other than zero on stable, but it is possible on nightly).
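As a quick sanity check of the size claim -- rustc applies the niche optimization through the wrapper struct, so in practice the following assertions hold:
use std::mem::size_of;
use std::num::NonZeroU32;
pub struct InternId {
    value: NonZeroU32,
}
fn main() {
    // `Option` re-uses the zero bit pattern for `None`, so it adds no space:
    assert_eq!(size_of::<Option<InternId>>(), size_of::<InternId>());
    assert_eq!(size_of::<InternId>(), 4);
}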
Alternatives and future work
None at present.
Summary
Allow specifying a dependency on a query group without making it a supertrait.
Motivation
Currently, there's only one way to express that queries from group A can use another group B: namely, B can be a supertrait of A:
#[salsa::query_group(AStorage)]
trait A: B {
}
This approach works and allows one to express complex dependencies. However, it falls down when one wants to make a dependency a private implementation detail: clients with db: &impl A can freely call B methods on the db.
This is a bad situation from a software engineering point of view: if everything is accessible, it's hard to draw a distinction between the public API and private implementation details. In the context of salsa the situation is even worse, because it breaks the "firewall" pattern. It's customary to wrap low-level, frequently-changing or volatile queries into higher-level queries which produce stable results and contain invalidation. In the current salsa, however, it's very easy to accidentally call a low-level volatile query instead of its wrapper, introducing an undesired dependency.
User's guide
To specify query dependencies, a requires attribute should be used:
#[salsa::query_group(SymbolsDatabaseStorage)]
#[salsa::requires(SyntaxDatabase)]
#[salsa::requires(EnvDatabase)]
pub trait SymbolsDatabase {
fn get_symbol_by_name(&self, name: String) -> Symbol;
}
The argument of requires is a path to a trait. The traits from all requires attributes are available when implementing the query:
fn get_symbol_by_name(
db: &(impl SymbolsDatabase + SyntaxDatabase + EnvDatabase),
name: String,
) -> Symbol {
// ...
}
However, these traits are not available without explicit bounds:
fn fuzzy_find_symbol(db: &impl SymbolsDatabase, name: String) {
// Can't accidentally call methods of the `SyntaxDatabase`
}
Note that, while the RFC does not propose to add per-query dependencies, a query implementation can voluntarily specify only a subset of the traits from the requires attribute:
fn get_symbol_by_name(
// Purposefully don't depend on EnvDatabase
db: &(impl SymbolsDatabase + SyntaxDatabase),
name: String,
) -> Symbol {
// ...
}
Reference guide
The implementation is straightforward and consists of adding the traits from requires attributes to various where bounds. For example, we would generate the following blanket impl for the above example:
impl<T> SymbolsDatabase for T
where
T: SyntaxDatabase + EnvDatabase,
T: salsa::plumbing::HasQueryGroup<SymbolsDatabaseStorage>
{
...
}
Alternatives and future work
The semantics of requires closely resembles where, so we could imagine a syntax based on magical where clauses:
#[salsa::query_group(SymbolsDatabaseStorage)]
pub trait SymbolsDatabase
where ???: SyntaxDatabase + EnvDatabase
{
fn get_symbol_by_name(&self, name: String) -> Symbol;
}
However, it's not obvious what should stand for ???. Self won't be ideal, because supertraits are sugar for bounds on Self, and we deliberately want different semantics. Perhaps picking a magical identifier like DB would work, though?
One potential future development here is per-query-function bounds, but they can already be simulated by voluntarily requiring fewer bounds in the implementation function.
Another direction for future work is privacy: because traits from the requires clause are not a part of the public interface, in theory it should be possible to restrict their visibility. In practice, this still hits the public-in-private lint, at least with a trivial implementation.
Summary
Add least-recently-used (LRU) value eviction as a supplement to garbage collection.
Motivation
Currently, the single mechanism for controlling memory usage in salsa is garbage collection. Experience with rust-analyzer has shown that it is insufficient, for two reasons:
- It's hard to determine which values should be collected. The current implementation in rust-analyzer just periodically clears all values of specific queries.
- GC is generally run in between revisions. However, especially after just opening a project, the number of values within a single revision can be high. In other words, GC doesn't really help with keeping peak memory usage under control. While it is possible to run GC concurrently with calculations (and this is in fact what rust-analyzer does right now to try to keep the high-water mark of memory lower), this is highly unreliable and inefficient.
The mechanism of LRU targets both of these weaknesses:
- LRU tracks which values are accessed, and uses this information to determine which values are actually unused.
- LRU has a fixed cap on the maximal number of entries, thus bounding the memory usage.
User's guide
It is possible to call the set_lru_capacity(n) method on any non-input query. The effect of this is that the table for the query stores at most n values in the database. If a new value is computed, and there are already n existing ones in the database, the least recently used one is evicted. Note that information about query dependencies is not evicted. It is possible to change the LRU capacity at runtime at any time. n == 0 is a special case, which completely disables the LRU logic. LRU is not enabled by default.
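For illustration, usage might look like the following sketch (assuming a query group with a parse query and a query_mut accessor; the exact API surface is not settled here):
// Cap the `parse` query's table at 256 memoized values:
db.query_mut(ParseQuery).set_lru_capacity(256);
// The capacity can be raised or lowered again at any time:
db.query_mut(ParseQuery).set_lru_capacity(1024);
// Zero is special: it disables the LRU logic entirely.
db.query_mut(ParseQuery).set_lru_capacity(0);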
Reference guide
Implementation-wise, we store a linked hash map of keys, in recently-used order. Because reads of the queries are considered uses, we now need to write-lock the query map even if the query is fresh. However, we don't do this bookkeeping if LRU is disabled, so you don't have to pay for it unless you use it.
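A minimal sketch of that bookkeeping, using the linked-hash-map crate and illustrative names:
use linked_hash_map::LinkedHashMap;
use std::hash::Hash;
struct LruKeys<K: Hash + Eq> {
    /// Keys in use order: front = least recently used, back = most recent.
    entries: LinkedHashMap<K, ()>,
    /// Zero disables LRU entirely.
    capacity: usize,
}
impl<K: Hash + Eq> LruKeys<K> {
    /// Record a use of `key`; returns the evicted key, if any.
    fn record_use(&mut self, key: K) -> Option<K> {
        if self.capacity == 0 {
            return None; // LRU disabled: no bookkeeping, no eviction.
        }
        // Re-inserting moves the key to the back (most recently used).
        self.entries.remove(&key);
        self.entries.insert(key, ());
        if self.entries.len() > self.capacity {
            // Evict the least recently used key (the front of the map).
            return self.entries.pop_front().map(|(k, ())| k);
        }
        None
    }
}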
A slight complication arises with volatile queries (and, in general, with any query with an untracked input). Similarly to GC, evicting such a query could lead to an inconsistent database. For this reason, volatile queries are never evicted.
Alternatives and future work
LRU is a compromise, as it is prone both to accidentally evicting useful queries and to needlessly holding onto useless ones. In particular, in the steady state and without additional GC, memory usage will be proportional to the LRU capacity: it is not only an upper bound, but a lower bound as well!
In theory, some deterministic way of evicting values when you know for sure that you don't need them anymore may be more efficient. However, it is unclear how exactly that would work! Experiments in rust-analyzer show that it's not easy to tame a dynamic crate graph, and that simplistic phase-based strategies fall down.
It's also worth noting that, unlike GC, LRU can in theory be more memory efficient than deterministic memory management. Unlike a traditional GC, we can safely evict "live" objects and recalculate them later. That makes it possible to use LRU for problems whose working set of "live" queries is larger than the available memory, at the cost of guaranteed recomputations.
Currently, eviction is strictly LRU-based. It should be possible to be smarter and take the size of values and the time required to recompute them into account when making eviction decisions.
Summary
- Introduce a user-visible concept of Durability
- Adjusting the "durability" of an input can allow salsa to skip a lot of validation work
- Garbage collection -- particularly of interned values -- however becomes more complex
- Possible future expansion: automatic detection of more "durable" input values
Motivation
Making validation faster by optimizing for "durability"
Presently, salsa's validation logic requires traversing all dependencies to check that they have not changed. This can sometimes be quite costly in practice: rust-analyzer for example sometimes spends as much as 90ms revalidating the results from a no-op change. One option to improve this is simply optimization -- salsa#176 for example reduces validation times significantly, and there remains opportunity to do better still. However, even if we are able to traverse the dependency graph more efficiently, it will still be an O(n) process. It would be nice if we could do better.
One observation is that, in practice, there are often input values that are known to change quite infrequently. For example, in rust-analyzer, the standard library and crates downloaded from crates.io are unlikely to change (though changes are possible; see below). Similarly, the Cargo.toml file for a project changes relatively infrequently compared to the sources. We say then that these inputs are more durable -- that is, they change less frequently.
This RFC proposes a mechanism to take advantage of durability for optimization purposes. Imagine that we have some query Q that depends solely on the standard library. The idea is that we can track the last revision R when the standard library was changed. Then, when traversing dependencies, we can skip traversing the dependencies of Q if it was last validated after the revision R. Put another way, we only need to traverse the dependencies of Q when the standard library changes -- which is unusual. If the standard library does change, for example by the user tinkering with its internal sources, then yes, we walk the dependencies of Q to see if it is affected.
User's guide
The durability type
We add a new type salsa::Durability which has three associated constants:
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Durability(..);
impl Durability {
// Values that change regularly, like the source to the current crate.
pub const LOW: Durability;
// Values that change infrequently, like Cargo.toml.
pub const MEDIUM: Durability;
// Values that are not expected to change, like sources from crates.io or the stdlib.
pub const HIGH: Durability;
}
Specifying the durability of an input
When setting an input foo, one can now invoke a method set_foo_with_durability, which takes a Durability as the final argument:
// db.set_foo(key, value) is equivalent to:
db.set_foo_with_durability(key, value, Durability::LOW);
// This would indicate that `foo` is not expected to change:
db.set_foo_with_durability(key, value, Durability::HIGH);
Durability of interned values
Interned values are always considered Durability::HIGH. This makes sense, as many queries that use only high-durability inputs will also make use of interning internally. A consequence of this is that they will not be garbage collected unless you use the specific patterns recommended below.
Synthetic writes
Finally, we add one new method, synthetic_write(durability), available on the salsa runtime:
db.salsa_runtime().synthetic_write(Durability::HIGH)
As the name suggests, synthetic_write causes salsa to act as though a write to an input of the given durability had taken place. This can be used for benchmarking, but it's also important for controlling what values get garbage collected, as described below.
Tracing and garbage collection
Durability affects garbage collection. The SweepStrategy struct is modified as follows:
/// Sweeps values which may be outdated, but which have not
/// been verified since the start of the current collection.
/// These are typically memoized values from previous computations
/// that are no longer relevant.
pub fn sweep_outdated(self) -> SweepStrategy;
/// Sweeps values which have not been verified since the start
/// of the current collection, even if they are known to be
/// up to date. This can be used to collect "high durability" values
/// that are not *directly* used by the main query.
///
/// So, for example, imagine a main query `result` which relies
/// on another query `threshold` and (indirectly) on a `threshold_inner`:
///
/// ```
/// result(10) [durability: Low]
/// |
/// v
/// threshold(10) [durability: High]
/// |
/// v
/// threshold_inner(10) [durability: High]
/// ```
///
/// If you modify a low durability input and then access `result`,
/// then `result(10)` and its *immediate* dependencies will
/// be considered "verified". However, because `threshold(10)`
/// has high durability and no high durability input was modified,
/// we will not verify *its* dependencies, so `threshold_inner` is not
/// verified (but it is also not outdated).
///
/// Collecting unverified things would therefore collect `threshold_inner(10)`.
/// Collecting only *outdated* things (i.e., with `sweep_outdated`)
/// would collect nothing -- but this does mean that some high durability
/// queries that are no longer relevant to your main query may stick around.
///
/// To get the most precise garbage collection, do a synthetic write with
/// high durability -- this will force us to verify *all* values. You can then
/// sweep unverified values.
pub fn sweep_unverified(self) -> SweepStrategy;
Reference guide
Review: The need for GC to collect outdated values
In general, salsa's lazy validation scheme can lead to the accumulation of garbage that is no longer needed. Consider a query like this one:
fn derived1(db: &impl Database, start: usize) -> usize {
    let middle = db.input(start);
    db.derived2(middle)
}
Now imagine that, on some particular run, we compute derived1(22):
- derived1(22)
  - executes input(22), which returns 44
  - then executes derived2(44)
The end result of this execution will be a dependency graph like:
derived1(22) -> derived2(44)
|
v
input(22)
Now, imagine that the user modifies input(22) to have the value 45.
The next time derived1(22) executes, it will load input(22) as before, but then execute derived2(45). This leaves us with a dependency graph as follows:
derived1(22) -> derived2(45)
|
v
input(22) derived2(44)
Notice that we still see derived2(44) in the graph. This is because we memoized the result in the last round and then simply had no use for it in this round. The role of GC is to collect "outdated" values like this one.
Review: Tracing and GC before durability
In the absence of durability, when you execute a query Q in some new revision where Q has not previously executed, salsa must trace back through all the queries that Q depends on to ensure that they are still up to date. As each of Q's dependencies is validated, we mark it to indicate that it has been checked in the current revision (and thus, within a particular revision, we would never validate or trace a particular query twice).
So, to continue our example, when we first executed derived1(22) in revision R1, we might have had a graph like:
derived1(22) -> derived2(44)
[verified: R1] [verified: R1]
|
v
input(22)
Now, after we modify input(22) and execute derived1(22) again, we would have a graph like:
derived1(22) -> derived2(45)
[verified: R2] [verified: R2]
|
v
input(22) derived2(44)
[verified: R1]
Note that derived2(44), the outdated value, never had its "verified" revision updated, because we never accessed it.
Salsa leverages this validation stamp to serve as the "marking" phase of a simple mark-sweep garbage collector. The idea is that the sweep method can collect any values that are "outdated" (whose "verified" revision is less than the current revision).
The intended model is that one can do a "mark-sweep" style garbage collection like so:
// Modify some input, triggering a new revision.
db.set_input(22, 45);
// The **mark** phase: execute the "main query", with the intention
// that we wish to retain all the memoized values needed to compute
// this main query, but discard anything else. For example, in an IDE
// context, this might be a "compute all errors" query.
db.derived1(22);
// The **sweep** phase: discard anything that was not traced during
// the mark phase.
db.sweep_all(...);
In the case of our example, when we execute sweep_all, it would collect derived2(44).
Challenge: Durability lets us avoid tracing
This tracing model is affected by the move to durability. Now, if some derived value has a high durability, we may skip tracing its descendants altogether. This means that they would never be "verified" -- that is, their "verified date" would never be updated.
This is why we modify the definition of "outdated" as follows:
- For a query value Q with durability D, let R_lc be the revision when values of durability D last changed. Let R_v be the revision when Q was last verified. Q is outdated if R_v < R_lc.
  - In other words, Q is outdated if it may have changed since it was last verified.
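In code form, the revised check might look like this sketch (illustrative names, not salsa's actual internals):
type Revision = u64;
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
enum Durability { Low, Medium, High }
struct QueryRecord {
    durability: Durability,
    /// Revision in which this memo was last verified (R_v).
    verified_at: Revision,
}
impl QueryRecord {
    /// `last_changed(d)` yields the revision in which any value of
    /// durability `d` last changed (R_lc).
    fn is_outdated(&self, last_changed: impl Fn(Durability) -> Revision) -> bool {
        self.verified_at < last_changed(self.durability)
    }
}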
Collecting interned and untracked values
Most values can be collected whenever we like without influencing correctness. However, interned values and those with untracked dependencies are an exception -- they can only be collected when outdated. This is because their values may not be reproducible -- in other words, re-executing an interning query (or one with untracked dependencies, which can read arbitrary program state) twice in a row may produce a different value. In the case of an interning query, for example, we may wind up using a different integer than we did before. If the query is outdated, this is not a problem: anything that depended on its result must also be outdated, and hence would be re-executed and would observe the new value. But if the query is not outdated, then we could get inconsistent results.
Alternatives and future work
Rejected: Arbitrary durabilities
We considered permitting arbitrary "levels" of durability -- for example, allowing the user to specify a number -- rather than offering just three. Ultimately it seemed like that level of control wasn't really necessary and that having just three levels would be sufficient and simpler.
Rejected: Durability lattices
We also considered permitting a "lattice" of durabilities -- e.g., to mirror the crate DAG in rust-analyzer -- but this is tricky because the lattice itself would be dependent on other inputs.
Dynamic databases
Metadata
- Author: nikomatsakis
- Date: 2020-06-29
- Introduced in: salsa-rs/salsa#1 (please update once you open your PR)
Summary
- Retool Salsa's setup so that the generated code for a query group is not dependent on the final database type, and interacts with the database only through dyn trait values.
- This imposes a certain amount of indirection but has the benefit that when a query group definition changes, less code must be recompiled as a result.
- Key changes include:
  - Database keys are "interned" in the database to produce a DatabaseKeyIndex.
  - The values for cached queries are stored directly in the hashtable instead of in an Arc. There is still an Arc per cached query, but it stores the dependency information.
  - The various traits are changed to make salsa::Database dyn-safe. Invoking methods on the runtime must now go through a salsa::Runtime trait.
  - The salsa::requires functionality is removed.
- Upsides of the proposal:
  - Potentially improved recompilation time. Minimal code is regenerated.
  - Removing the DatabaseData unsafe code hack that was required by slots.
- Downsides of the proposal:
  - The effect on runtime performance must be measured.
  - DatabaseKeyIndex values will leak, as we propose no means to reclaim them. However, the same is true of Slot values today.
  - Storing values for the tables directly in the hashtable makes it less obvious how we would return references to them in a safe fashion (before, I had planned to have a separate module that held onto the Arc for the slot, so we were sure the value would not be deallocated; one can still imagine supporting this feature, but it would require some fancier unsafe code reasoning, although it would be more efficient).
  - The salsa::requires functionality is removed.
Motivation
Under the current salsa setup, all of the "glue code" that manages cache invalidation and other logic is ultimately parameterized by a type DB that refers to the full database. The problem is that, if you consider a typical salsa crate graph, the actual value for that type is not available until the final database crate is compiled:
graph TD;
  Database["Crate that defines the database"];
  QueryGroupA["Crate with query group A"];
  QueryGroupB["Crate with query group B"];
  SalsaCrate["the `salsa` crate"];
  Database -- depends on --> QueryGroupA;
  Database -- depends on --> QueryGroupB;
  QueryGroupA -- depends on --> SalsaCrate;
  QueryGroupB -- depends on --> SalsaCrate;
The result is that we do not actually compile a good part of the code from QueryGroupA or QueryGroupB until we build the final database crate.
What you can do today: dyn traits
What you can do today is to define a "dyn-compatible" query group trait and then write your derived functions using a dyn type as the argument:
#[salsa::query_group(QueryGroupAStorage)]
trait QueryGroupA {
fn derived(&self, key: usize) -> usize;
}
fn derived(db: &dyn QueryGroupA, key: usize) -> usize {
key * 2
}
This has the benefit that the derived function is not generic. However, it's still true that the glue code salsa generates will be generic over a DB type -- this includes the impl of QueryGroupA but also the Slot and other machinery. This means that even if the only change is to query group B, in a different crate, the glue code for query group A ultimately has to be recompiled whenever the Database crate is rebuilt (though incremental compilation may help here). Moreover, as reported in salsa-rs/salsa#220, measurements of rust-analyzer suggest that this code may be duplicated and account for more of the binary than we would expect.
FIXME: I'd like to have better measurements on the above!
Our goal
The primary goal of this RFC is to make it so that the glue code we generate for query groups is not dependent on the database type, thus enabling better incremental rebuilds.
User's guide
Most of the changes in this RFC are "under the hood". But there are various user-visible changes proposed here.
All query groups must be dyn safe
The largest one is that all Salsa query groups must now be dyn-safe. The existing salsa query methods are all dyn-safe, so what this really implies is that one cannot have super-traits that use generic methods or other things that are not dyn safe. For example, this query group would be illegal:
#[salsa::query_group(QueryGroupAStorage)]
trait QueryGroupA: Foo {
}
trait Foo {
fn method<T>(t: T) { }
}
We could support query groups that are not dyn safe, but it would require us to have two "similar but different" ways of generating plumbing, and I'm not convinced that it's worth it. Moreover, it would require some form of opt-in so that would be a measure of user complexity as well.
All query functions must take a dyn database
You used to be able to implement queries by using impl MyDatabase, like so:
fn my_query(db: &impl MyDatabase, ...) { .. }
but you must now use dyn MyDatabase:
fn my_query(db: &dyn MyDatabase, ...) { .. }
Databases embed a Storage<DB> with a fixed field name
The "Hello World" database becomes the following:
#[salsa::database(QueryGroup1, ..., QueryGroupN)]
struct MyDatabase {
storage: salsa::Storage<Self>
}
impl salsa::Database for MyDatabase {}
In particular:
- You now embed a salsa::Storage<Self> instead of a salsa::Runtime<Self>.
- The field must be named storage by default; we can include a #[salsa::storage_field(xxx)] annotation to change that default if desired.
  - Or we could scrape the struct declaration and infer it, I suppose.
- You no longer have to define the salsa_runtime and salsa_runtime_mut methods; they move to the DatabaseOps trait and are manually implemented by doing self.storage.runtime() and so forth.
Why these changes, and what is this Storage struct? This is because the actual storage for queries is moving outside of the runtime. The Storage struct just combines the Runtime (whose type no longer references DB directly) with an Arc<DB::Storage>. The full type of Storage, since it includes the database type, cannot appear in any public interface; it is just used by the various implementations that are created by salsa::database.
Instead of db.query(Q), you write Q.in_db(&db)
As a consequence of the previous point, the existing query and query_mut methods on the salsa::Database trait are changed to methods on the query types themselves. So instead of db.query(SomeQuery), one would write SomeQuery.in_db(&db) (or in_db_mut). This both helps by making the salsa::Database trait dyn-safe and also works better with the new use of dyn types, since it permits a coercion from &db to the appropriate dyn database type at the point of call.
The salsa-event mechanism will move to dynamic dispatch
A further consequence is that the existing salsa_event method will be simplified and made suitable for dynamic dispatch. It used to take a closure that would produce the event if necessary; it now simply takes the event itself. This is partly because events themselves no longer contain complex information: they used to have database-keys, which could require expensive cloning, but they now have simple indices.
fn salsa_event(&self, event: Event) {
#![allow(unused_variables)]
}
This may imply some runtime cost, since various parts of the machinery invoke salsa_event, and those calls will now be virtual calls. They would previously have been static calls that would likely have been optimized away entirely.
It is, however, possible that ThinLTO or other such optimizations could remove those calls; this has not been tested, and in any case the runtime effects are not expected to be high, since all the calls will always go to the same function.
The salsa::requires function is removed
We currently offer a feature for "private" dependencies between query groups called #[salsa::requires(ExtraDatabase)]. This then requires query functions to be written like:
fn query_fn(db: &(impl Database + ExtraDatabase), ...) { }
This format is not compatible with dyn, so this feature is removed.
Reference guide
Example
To explain the proposal, we'll use the Hello World example, lightly adapted:
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld: salsa::Database {
#[salsa::input]
fn input_string(&self, key: ()) -> Arc<String>;
fn length(&self, key: ()) -> usize;
}
fn length(db: &dyn HelloWorld, (): ()) -> usize {
// Read the input string:
let input_string = db.input_string(());
// Return its length:
input_string.len()
}
#[salsa::database(HelloWorldStorage)]
struct DatabaseStruct {
runtime: salsa::Runtime<DatabaseStruct>,
}
impl salsa::Database for DatabaseStruct {
fn salsa_runtime(&self) -> &salsa::Runtime<Self> {
&self.runtime
}
fn salsa_runtime_mut(&mut self) -> &mut salsa::Runtime<Self> {
&mut self.runtime
}
}
Identifying queries using the DatabaseKeyIndex
We introduce the following struct that represents a database key using a series of indices:
struct DatabaseKeyIndex {
/// Identifies the query group.
group_index: u16,
/// Identifies the query within the group.
query_index: u16,
/// Identifies the key within the query.
key_index: u32,
}
This struct allows the various query group structs to refer to database keys without having to use a type like DB::DatabaseKey that is dependent on the DB.
The group/query indices will be assigned by the salsa::database and salsa::query_group macros respectively. When query group storage is created, it will be passed its group index by the database. Each query will be able to access its query index through the Query trait, as query indices are statically known at the time that the query is compiled (the group index, in contrast, depends on the full set of groups for the database).
The key index can be assigned by the query as it executes, without any central coordination. Each query will use an IndexMap (from the indexmap crate) mapping Q::Key -> QueryState. Inserting new keys into this map also creates new indices, and it is possible to index into the map in O(1) time later to obtain the state (or key) for a given query. This map replaces the existing Q::Key -> Arc<Slot<..>> map that is used today.
One notable implication: we cannot remove entries from the query index map (e.g., for GC) because that would invalidate the existing indices. We can however replace the query-state with a "not computed" value. This is not new: slots already take this approach today. In principle, we could extend the tracing GC to permit compressing and perhaps even rewriting indices, but it's not clear that this is a problem in practice.
The DatabaseKeyIndex also supports a debug method that returns a value with a human-readable Debug output, so that you can do debug!("{:?}", index.debug(db)). This works by generating a fmt_debug method that is supported by the various query groups.
The various query traits are not generic over a database
Today, the Query, QueryFunction, and QueryGroup traits are generic over the database DB, which allows them to name the final database type and associated types derived from it. In the new scheme, we never want to do that, and so instead they will now have an associated type, DynDb, that maps to the dyn version of the query group trait that the query is associated with. Therefore QueryFunction, for example, can become:
pub trait QueryFunction: Query {
    fn execute(db: &<Self as QueryDb<'_>>::DynDb, key: Self::Key) -> Self::Value;
    fn recover(db: &<Self as QueryDb<'_>>::DynDb, cycle: &[DatabaseKeyIndex], key: &Self::Key) -> Option<Self::Value> {
        let _ = (db, cycle, key);
        None
    }
}
Storing query results and tracking dependencies
In today's setup, we have all the data for a particular query stored in a Slot<Q, DB, MP>, and these slots hold references to one another to track dependencies. Because the type of each slot is specific to the particular query Q, the references between slots are done using an Arc<dyn DatabaseSlot<DB>> handle. This requires some unsafe hacks, including the DatabaseData associated type.
This RFC proposes to alter this setup. Dependencies will store a DatabaseKeyIndex instead. This means that validating dependencies is less efficient, as we no longer have a direct pointer to the dependency information but instead must execute three index lookups (one to find the query group, one to locate the query, and then one to locate the key). Similarly, the LRU list can be reverted to a LinkedHashMap of indices.
We may tinker with other approaches too: the key change in the RFC is that we do not need to store a DB::DatabaseKey or Slot<..DB..>, but instead can use some type for dependencies that is independent of the database type DB.
Dispatching methods from a DatabaseKeyIndex
There are a number of methods that can be dispatched through the database interface on a DatabaseKeyIndex. For example, we already mentioned fmt_debug, which emits a debug representation of the key, but there is also maybe_changed_after, which checks whether the value for a given key may have changed since the given revision. Each of these methods is a member of the DatabaseOps trait and they are dispatched as follows.
First, the #[salsa::database] procedural macro is the one which generates the DatabaseOps impl for the database. This base method simply matches on the group index to determine which query group contains the key, and then dispatches to an inherent method defined on the appropriate query group struct:
impl salsa::plumbing::DatabaseOps for DatabaseStruct {
// We'll use the `fmt_debug` method as an example
fn fmt_debug(&self, index: DatabaseKeyIndex, fmt: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match index.group_index() {
0 => {
let storage = <Self as HasQueryGroup<HelloWorld>>::group_storage(self);
storage.fmt_debug(index, fmt)
}
_ => panic!("Invalid index")
}
}
}
The query group struct has a very similar inherent method that dispatches based on the query index and invokes a method on the query storage:
impl HelloWorldGroupStorage__ {
// We'll use the `fmt_debug` method as an example
fn fmt_debug(&self, index: DatabaseKeyIndex, fmt: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match index.query_index() {
0 => self.appropriate_query_field.fmt_debug(index, fmt),
1 => ...
_ => panic!("Invalid index")
}
}
}
Finally, the query storage can use the key index to look up the appropriate data from the FxIndexSet.
Wrap runtime in a Storage<DB> type
The Salsa runtime is currently Runtime<DB>, but it will change to just Runtime and thus not be generic over the database. This means it can be referenced directly by query storage implementations. This is very useful because it allows that type to have a number of pub(crate) details that query storage implementations make use of but which are not exposed as part of our public API.
However, the Runtime struct used to contain a DB::Storage, and without the DB in its type, it no longer can. Therefore, we will introduce a new Storage<DB> type which is defined like so:
pub struct Storage<DB: DatabaseImpl> {
query_store: Arc<DB::DatabaseStorage>,
runtime: Runtime,
}
impl<DB: DatabaseImpl> Storage<DB> {
    pub fn query_store(&self) -> &DB::DatabaseStorage {
        &self.query_store
    }
    pub fn salsa_runtime(&self) -> &Runtime {
        &self.runtime
    }
    pub fn salsa_runtime_mut(&mut self) -> &mut Runtime {
        &mut self.runtime
    }
    /// Used for parallel queries
    pub fn snapshot(&self) -> Self {
        Storage {
            query_store: self.query_store.clone(),
            runtime: self.runtime.snapshot(),
        }
    }
}
The user is expected to include a field storage: Storage<DB> in their database definition. The salsa::database procedural macro, when it generates impls of traits like HasQueryGroup, will embed code like self.storage that looks for that field.
salsa_runtime methods move to the DatabaseOps trait
The salsa_runtime methods used to be manually implemented by users to define the field that contains the salsa runtime. This was always boilerplate. The salsa::database macro now handles that job by defining them to invoke the corresponding methods on Storage.
Salsa database trait becomes dyn safe
Under this proposal, the Salsa database must be dyn safe. This implies that we have to make a few changes:
- The query and query_mut methods move to an extension trait.
- The DatabaseStorageTypes supertrait is removed (that trait is renamed and altered; see next section).
- The salsa_event method changes, as described in the User's guide.
Salsa database trait requires 'static, at least for now
One downside of this proposal is that the salsa::Database trait now has a 'static bound. This is a result of the lack of GATs -- in particular, the queries expect a <Q as QueryDb<'_>>::DynDb as argument. In the query definition, we have something like type DynDb = dyn QueryGroupDatabase, which in turn defaults to dyn QueryGroupDatabase + 'static.
At the moment, this limitation is harmless, since salsa databases don't support generic parameters. But it would be good to lift it in the future, especially as we would like to support arena allocation and other such patterns. The limitation could be overcome in the future by:
- converting to a GAT like DynDb<'a>, if those were available;
- or by simulating GATs by introducing a trait to carry the DynDb definition, like QueryDb<'a>, where Query has the supertrait for<'a> Self: QueryDb<'a>. This would permit the DynDb type to be referenced by writing <Q as QueryDb<'a>>::DynDb.
Salsa query group traits are extended with Database and HasQueryGroup supertraits
When #[salsa::query_group] is applied to a trait, we currently generate a copy of the trait that is "more or less" unmodified (although we sometimes add additional synthesized methods, such as the set method for an input). Under this proposal, we will also introduce a HasQueryGroup<QueryGroupStorage> supertrait. Therefore the following input:
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld { .. }
will generate a trait like:
trait HelloWorld:
salsa::Database +
salsa::plumbing::HasQueryGroup<HelloWorldStorage>
{
..
}
The Database trait is the standard salsa::Database trait and contains various helper methods. The HasQueryGroup trait is implemented by the database and defines various plumbing methods that are used by the storage implementations.
One downside of this is that salsa::Database methods become available on the trait; we might want to give internal plumbing methods more obscure names.
Bounds were already present on the blanket impl of salsa query group trait
The new bounds that are appearing on the trait were always present on the blanket impl that the salsa::query_group procedural macro generates, which looks like so (and continues unchanged under this RFC):
impl<DB> HelloWorld for DB
where
    DB: salsa::Database,
    DB: salsa::plumbing::HasQueryGroup<HelloWorldStorage>,
{
    ...
}
The reason we generate the impl is so that the salsa::database procedural macro can simply create the HasQueryGroup impl and never needs to know the name of the HelloWorld trait, only the HelloWorldStorage type.
Storage types no longer parameterized by the database
Today's storage types, such as Derived, are parameterized over both a query Q and a DB (along with the memoization policy MP):
// Before this RFC:
pub struct DerivedStorage<DB, Q, MP>
where
Q: QueryFunction<DB>,
DB: Database + HasQueryGroup<Q::Group>,
MP: MemoizationPolicy<DB, Q>,
The DB parameter should no longer be needed after the previously described changes are made, so that the signature looks like:
// After this RFC:
pub struct DerivedStorage<Q, MP>
where
    Q: QueryFunction,
    MP: MemoizationPolicy<Q>,
Alternatives and future work
The linchpin of this design is the DatabaseKeyIndex type, which allows signatures to refer to "any query in the system" without reference to the DB type. The biggest downside of the system is that this type is an integer which then requires a tracing GC to recover index values. The primary alternative would be to use an Arc-like scheme, but this has some severe downsides:
- Requires reference counting and allocation.
- Hashing and equality comparisons have more data to process versus an integer.
- Equality comparisons must still be deep, since you may have older and newer keys co-existing.
- Requires an Arc<dyn DatabaseKey>-like setup, which then encounters the problem that this type is not Send or Sync, leading to hacks like the DB::DatabaseData we use today.
Opinionated cancelation
Metadata
- Author: nikomatsakis
- Date: 2021-05-15
- Introduced in: salsa-rs/salsa#265
Summary
- Define stack unwinding as the one true way to handle cancelation in salsa queries
- Modify salsa queries to automatically initiate unwinding when they are canceled
- Use a distinguished value for this panic so that people can test if the panic was a result of cancelation
Motivation
Salsa's database model is fundamentally like a read-write lock. There is always a single master copy of the database which supports writes, and any number of concurrent snapshots that support reads. Whenever a write to the database occurs, any queries executing in those snapshots are considered canceled, because their results are based on stale data. The write blocks until they complete before it actually takes effect. It is therefore advantageous for those reads to complete as quickly as possible.
Cancelation in salsa is currently quite minimal. Effectively, a flag becomes true, and queries can manually check for this flag -- which is easy to forget to do. Moreover, we support two modes of cancelation: you can either use Result values or use unwinding. In practice, though, there isn't much point to using Result: you can't really "recover" from cancelation.
The largest user of salsa, rust-analyzer, uses a fairly opinionated and aggressive form of cancelation:
- Every query is instrumented, using salsa's various hooks, to check for cancelation before it begins.
- If a query is canceled, then it immediately panics, using a special sentinel value.
- Any worker threads holding a snapshot of the DB recognize this value and go back to waiting for work.
We propose to make this model of cancelation the only model of cancelation.
User's guide
When you do a write to the salsa database, that write will block until any queries running in background threads have completed. You really want those queries to complete quickly, though, because they are now operating on stale data and their results are therefore not meaningful. To expedite the process, salsa will cancel those queries: they will panic as soon as they try to execute another salsa query. Those panics occur using a sentinel value that you can check for if you wish.
If you have a query that contains a long loop which does not execute any intermediate queries, salsa won't be able to cancel it automatically. You may wish to check for cancelation yourself by invoking the unwind_if_cancelled method.
Reference guide
The changes required to implement this RFC are as follows:
- Remove the is_current_revision_canceled method.
- Introduce a sentinel cancelation token that can be used with resume_unwind.
- Introduce an unwind_if_cancelled method into the Database trait which checks whether cancelation has occurred and panics if so (a sketch of its expected shape follows this list).
  - This method also triggers a salsa_event callback.
  - This should probably be an inlined check for the if, with an outlined function to do the actual panic.
- Modify the code for the various queries to invoke unwind_if_cancelled when they are invoked or validated.
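A self-contained sketch of that inline-check/outlined-panic shape (all names are illustrative, not the final API):
use std::panic::resume_unwind;
use std::sync::atomic::{AtomicBool, Ordering};
/// Sentinel panic payload; catchers can downcast to detect cancelation.
pub struct Canceled;
/// Hot path: a cheap, inlinable flag check.
#[inline]
pub fn unwind_if_cancelled(pending_write: &AtomicBool) {
    if pending_write.load(Ordering::Relaxed) {
        cancel_cold_path();
    }
}
/// Cold path: outlined so the panic machinery stays out of the hot code.
#[cold]
fn cancel_cold_path() -> ! {
    resume_unwind(Box::new(Canceled))
}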
Frequently asked questions
Isn't it hard to write panic-safe code?
It is. However, the salsa runtime is panic-safe, and all salsa queries must already avoid side-effects for other reasons, so in our case, being panic-safe happens by default.
Isn't recovering from panics a bad idea?
No. It's a bad idea to do "fine-grained" recovery from panics, but catching a panic at a high-level of your application and soldiering on is actually exactly how panics were meant to be used. This is especially true in salsa, since all code is already panic-safe.
Does this affect users of salsa who do not use threads?
No. Cancelation in salsa only occurs when there are parallel readers and writers.
What about people using panic-as-abort?
This does mean that salsa is not compatible with panic-as-abort. Strictly speaking, you could still use salsa in single-threaded mode, so that cancelation is not possible.
Remove garbage collection
Metadata
- Author: nikomatsakis
- Date: 2021-06-06
- Introduced in: https://github.com/salsa-rs/salsa/pull/267
Summary
- Remove support for tracing garbage collection
- Make interned keys immortal, for now at least
Motivation
Salsa has traditionally supported "tracing garbage collection", which allowed the user to remove values that were not used in the most recent revision. While this feature is nice in theory, it is not used in practice. Rust Analyzer, for example, prefers to use the LRU mechanism, which offers stricter limits. Considering that it is not used, supporting the garbage collector involves a decent amount of complexity and makes it harder to experiment with Salsa's structure. Therefore, this RFC proposes to remove support for tracing garbage collection. If desired, it can be added back at some future date in an altered form.
User's guide
The primary effect for users is that the various 'sweep' methods from the database and queries are removed. The only way to control memory usage in Salsa now is through the LRU mechanisms.
Reference guide
Removing the GC involves deleting a fair bit of code. The most interesting and subtle code is in the interning support. Previously, interned keys tracked the revision in which they were interned, but also the revision in which they were last accessed. When the sweeping method would run, any interned keys that had not been accessed in the current revision were collected. Since we permitted the GC to run with only a read lock on the database, we had to be prepared for accesses to interned keys to occur concurrently with the GC, and thus for the possibility that various operations could fail. This complexity is removed, but it means that there is no way to remove interned keys at present.
Frequently asked questions
Why not just keep the GC?
The complexity: the GC is essentially unused in practice, and supporting it makes it harder to experiment with Salsa's structure.
Are any users relying on the sweeping functionality?
Hard to say for sure, but none that we know of.
Don't we want some mechanism to control memory usage?
Yes, but we don't quite know what it looks like. LRU seems to be adequate in practice for the present.
What about for interned keys in particular?
We could add an LRU-like mechanism to interning.
Cycle recovery
Metadata
- Author: nikomatsakis
- Date: 2021-10-31
- Introduced in: https://github.com/salsa-rs/salsa/pull/285
Summary
- Permit cycle recovery as long as at least one participant has recovery enabled.
- Modify cycle recovery to take a &Cycle.
- Introduce a Cycle type that carries information about a cycle and lists participants in a deterministic order.
Motivation
Cycle recovery has been found to have some subtle bugs that could lead to panics. Furthermore, the existing cycle recovery APIs require all participants in a cycle to have recovery enabled and give limited and non-deterministic information. This RFC tweaks the user exposed APIs to correct these shortcomings. It also describes a major overhaul of how cycles are handled internally.
User's guide
By default, cycles in the computation graph are considered a "programmer bug" and result in a panic. Sometimes, though, cycles are outside of the programmer's control. Salsa provides mechanisms to recover from cycles that can help in those cases.
Default cycle handling: panic
By default, when Salsa detects a cycle in the computation graph, Salsa will panic with a salsa::Cycle as the panic value. Your queries should not attempt to catch this value; rather, the salsa::Cycle is meant to be caught by the outermost thread, which can print out information from it to diagnose what went wrong. The Cycle type offers a few methods for inspecting the participants in the cycle:
- participant_keys -- returns an iterator over the DatabaseKeyIndex for each participant in the cycle.
- all_participants -- returns an iterator over String values for each participant in the cycle (debug output).
- unexpected_participants -- returns an iterator over String values for each participant in the cycle that doesn't have recovery information (see next section).
Cycle implements Debug, but because the standard trait doesn't provide access to the database, the output can be kind of inscrutable. To get more readable Debug values, use the method cycle.debug(db), which returns an impl Debug that is more readable.
Cycle recovery
Panicking when a cycle occurs is ok for situations where you believe a cycle is impossible. But sometimes cycles can result from illegal user input and cannot be statically prevented. In these cases, you might prefer to gracefully recover from a cycle rather than panicking the entire query. Salsa supports that with the idea of cycle recovery.
To use cycle recovery, you annotate potential participants in the cycle with a #[salsa::recover(my_recover_fn)] attribute. When a cycle occurs, if any participant P has recovery information, then no panic occurs. Instead, the execution of P is aborted and P will execute the recovery function to generate its result. Participants in the cycle that do not have recovery information continue executing as normal, using this recovery result.
The recovery function has a similar signature to a query function. It is given a reference to your database along with a salsa::Cycle describing the cycle that occurred; it returns the result of the query. Example:
fn my_recover_fn(
    db: &dyn MyDatabase,
    cycle: &salsa::Cycle,
) -> MyResultValue
The db and cycle arguments can be used to prepare a useful error message for your users.
Important: Although the recovery function is given a db handle, you should be careful to avoid creating a cycle from within recovery or invoking queries that may be participating in the current cycle. Attempting to do so can result in inconsistent results.
Figuring out why recovery did not work
If a cycle occurs and some of the participant queries have #[salsa::recover] annotations and others do not, then the cycle will be treated as irrecoverable and will simply panic. You can use the Cycle::unexpected_participants method to figure out why recovery did not succeed and add the appropriate #[salsa::recover] annotations, as sketched below.
Reference guide
This RFC accompanies a rather long and complex PR with a number of changes to the implementation. We summarize the most important points here.
Cycles
Cross-thread blocking
The interface for blocking across threads now works as follows:
- When one thread T1 wishes to block on a query Q being executed by another thread T2, it invokes Runtime::try_block_on. This will check for cycles. Assuming no cycle is detected, it will block T1 until T2 has completed with Q. At that point, T1 reawakens. However, we don't know the result of executing Q, so T1 now has to "retry". Typically, this will result in successfully reading the cached value.
- While T1 is blocking, the runtime moves its query stack (a Vec) into the shared dependency graph data structure. When T1 reawakens, it recovers ownership of its query stack before returning from try_block_on.
Cycle detection
When a thread T1 attempts to execute a query Q, it will try to load the value for Q from the memoization tables. If it finds an InProgress marker, that indicates that Q is currently being computed. This indicates a potential cycle. T1 will then try to block on the query Q:
- If Q is also being computed by T1, then there is a cycle.
- Otherwise, if Q is being computed by some other thread T2, we have to check whether T2 is (transitively) blocked on T1. If so, there is a cycle.
These two cases are handled internally by the Runtime::try_block_on function. Detecting the intra-thread cycle case is easy; to detect cross-thread cycles, the runtime maintains a dependency DAG between threads (identified by RuntimeId). Before adding an edge T1 -> T2 (i.e., T1 is blocked waiting for T2) into the DAG, it checks whether a path exists from T2 to T1. If so, we have a cycle and the edge cannot be added (otherwise the DAG would no longer be acyclic). A sketch of this check appears below.
When a cycle is detected, the current thread T1 has full access to the query stacks that are participating in the cycle. Consider: naturally, T1 has access to its own stack. There is also a path T2 -> ... -> Tn -> T1 of blocked threads. Each of the blocked threads T2 ..= Tn will have moved their query stacks into the dependency graph, so those query stacks are available for inspection.
Using the available stacks, we can create a list of cycle participants Q0 ... Qn and store it in a Cycle struct. If none of the participants Q0 ... Qn have cycle recovery enabled, we panic with the Cycle struct, which will trigger all the queries on this thread to panic.
Cycle recovery via fallback
If any of the cycle participants Q0 ... Qn has cycle recovery set, we recover from the cycle. To help explain how this works, we will use this example cycle which contains three threads. Beginning with the current query, the cycle participants are QA3, QB2, QB3, QC2, QC3, and QA2.
         The cyclic
         edge we have
         failed to add.
              :
      A       :    B                    C
              :
     QA1      v   QB1                QC1
  ┌► QA2     ┌──► QB2              ┌─► QC2
  │  QA3 ────┘            QB3 ─────┘   QC3 ───┐
  │                                           │
  └───────────────────────────────────────────┘
Recovery works in phases:
- Analyze: As we enumerate the query participants, we collect their collective inputs (all queries invoked so far by any cycle participant) and the max changed-at and min duration. We then remove the cycle participants themselves from this list of inputs, leaving only the queries external to the cycle.
- Mark: For each query Q that is annotated with
#[salsa::recover]
, we mark it and all of its successors on the same thread by setting itscycle
flag to thec: Cycle
we constructed earlier; we also reset its inputs to the collective inputs gathering during analysis. If those queries resume execution later, those marks will trigger them to immediately unwind and use cycle recovery, and the inputs will be used as the inputs to the recovery value.- Note that we mark all the successors of Q on the same thread, whether or not they have recovery set. We'll discuss later how this is important in the case where the active thread (A, here) doesn't have any recovery set.
- Unblock: Each blocked thread `T` that has a recovering query is forcibly reawoken; the outgoing edge from that thread to its successor in the cycle is removed. Its condvar is signalled with a `WaitResult::Cycle(c)` (sketched after this list). When the thread reawakens, it will see that result and start unwinding with the cycle `c`.
- Handle the current thread: Finally, we have to choose how to have the current thread proceed. If the current thread includes any queries with recovery information, then we can begin unwinding. Otherwise, the current thread simply continues as if there had been no cycle: the cyclic edge is added to the graph and the current thread blocks. This is possible because some other thread has recovery information and has therefore been awoken.
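For orientation, the value a blocked thread is signalled with might be shaped roughly as below. This is a sketch: the `Cycle(..)` variant is named above, but the other variant names and the `Cycle` struct's fields are assumptions.

```rust
use std::sync::Arc;

// Sketch: the list of cycle participants `Q0 ... Qn`, identified here by
// hypothetical u32 key indices.
#[derive(Clone)]
struct Cycle {
    participant_keys: Arc<Vec<u32>>,
}

// Sketch of what a blocked thread's condvar is signalled with.
enum WaitResult {
    // The query completed; retry reading the cached value.
    Completed,
    // The computing thread panicked; propagate the panic.
    Panicked,
    // A cycle was detected; unwind with this `Cycle`.
    Cycle(Cycle),
}
```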
Let's walk through the process with a few examples.
Example 1: Recovery on the detecting thread
Consider the case where only the query QA2 has recovery set. It and QA3 will be marked with their `cycle` flag set to `c: Cycle`. Threads B and C will not be unblocked, as they do not contain any queries with cycle recovery. The current thread (thread A) will initiate unwinding with the cycle `c` as the value. Unwinding will pass through QA3 and be caught by QA2. QA2 will substitute the recovery value and return normally. QA1 and QC3 will then complete normally, and so forth, on up until all queries have completed.
Example 2: Recovery in two queries on the detecting thread
Consider the case where both QA2 and QA3 have recovery set. Execution proceeds as in Example 1 until the current thread initiates unwinding. When QA3 receives the cycle, it stores its recovery value and completes normally. QA2 then adds QA3 as an input dependency: at that point, QA2 observes that it too has the cycle mark set, and so it initiates unwinding. The rest of QA2 therefore never executes. This unwinding is caught by QA2's entry point, which stores the recovery value and returns normally. QA1 and QC3 then continue normally, as they have not had their `cycle` flag set.
Example 3: Recovery on another thread
Now consider the case where only the query QB2 has recovery set. It and QB3 will be marked with the cycle `c: Cycle`, and thread B will be unblocked; the edge `QB3 -> QC2` will be removed from the dependency graph. Thread A will then add an edge `QA3 -> QB2` and block on thread B. At that point, thread A releases the lock on the dependency graph, and so thread B is re-awoken. It observes the `WaitResult::Cycle` and initiates unwinding. Unwinding proceeds through QB3 and into QB2, which recovers. QB1 is then able to execute normally, as is QA3, and execution proceeds from there.
Example 4: Recovery on all queries
Now consider the case where all the queries have recovery set. In that case, they are all marked with the cycle, and all the cross-thread edges are removed from the graph. Each thread will independently awaken and initiate unwinding. Each query will recover.
Frequently asked questions
Why have other threads retry instead of giving them the value?
In the past, when one thread `T1` blocked on some query `Q` being executed by another thread `T2`, we would create a custom channel between the threads. `T2` would then send the result of `Q` directly to `T1`, and `T1` had no need to retry. This mechanism was simplified in this RFC because we don't always have a value available: sometimes the cycle occurs while `T2` is just verifying whether a memoized value is still valid. In that case, the value may not have been computed, and so when `T1` retries it will in fact go on to compute the value. (Previously, this case was overlooked by the cycle handling logic and resulted in a panic.)
Why do we use unwinding to manage cycle recovery?
When a query `Q` participates in cycle recovery, we use unwinding to get from the point where the cycle is detected back to the query's execution function. This ensures that the rest of `Q` never runs. This is important because `Q` might otherwise go on to create new cycles even while recovery is proceeding. Consider an example like:
```rust
#[salsa::recovery]
fn query_q1(db: &dyn Database) {
    db.query_q2();
    db.query_q3(); // <-- this never runs, thanks to unwinding
}

#[salsa::recovery]
fn query_q2(db: &dyn Database) {
    db.query_q1();
}

#[salsa::recovery]
fn query_q3(db: &dyn Database) {
    db.query_q1();
}
```
Why not invoke the recovery functions all at once?
The code currently unwinds frame by frame and invokes recovery as it goes. Another option might be to invoke the recovery function for all participants in the cycle up front. This would be fine, but it's a bit difficult to do, since the types for each cycle are different and the `Runtime` code doesn't know what they are. We also don't have access to the memoization tables and so forth.
Parallel friendly caching
Metadata
- Author: nikomatsakis
- Date: 2021-05-29
- Introduced in: (please update once you open your PR)
Summary
- Rework query storage to be based on concurrent hashmaps instead of slots with read-write locked state.
Motivation
Two-fold:
- Simpler, cleaner, and hopefully faster algorithm.
- Enables some future developments that are not part of this RFC:
- Derived queries whose keys are known to be integers.
- Fixed point cycles so that salsa and chalk can be deeply integrated.
- Non-synchronized queries that potentially execute on many threads in parallel (required for fixed point cycles, but potentially valuable in their own right).
User's guide
No user visible changes.
Reference guide
Background: Current structure
Before this RFC, the overall structure of derived queries is as follows:
- Each derived query has a `DerivedStorage<Q>` (stored in the database) that contains:
  - the `slot_map`: a monotonically growing, indexable map from keys (`Q::Key`) to the `Slot<Q>` for the given key
  - the LRU list
- Each `Slot<Q>` has:
  - an r-w locked query state that can be:
    - not-computed
    - in-progress, with synchronization storage:
      - the `id` of the runtime computing the value
      - `anyone_waiting`: an `AtomicBool` set to true if other threads are awaiting the result
    - memoized, with a `Memo<Q>`
- A `Memo<Q>` has:
  - an optional value `Option<Q::Value>`
  - dependency information:
    - verified-at
    - changed-at
    - durability
    - the input set (typically an `Arc<[DatabaseKeyIndex]>`)
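To make that shape concrete, here is a rough sketch of the pre-RFC layout as Rust types. This is a simplification with stand-in names (`QueryConfig`, the `Vec`-based slot map, the primitive type aliases), not the actual salsa source:

```rust
use std::sync::atomic::AtomicBool;
use std::sync::{Arc, RwLock};

// Stand-ins for salsa-internal types, just to make the sketch self-contained.
type RuntimeId = u32;
type Revision = u64;
type Durability = u8;
type DatabaseKeyIndex = u32;

trait QueryConfig {
    type Key;
    type Value;
}

struct DerivedStorage<Q: QueryConfig> {
    // Monotonically growing, indexable map from `Q::Key` to its slot.
    // Reads take the read lock; inserting a new slot takes the write lock.
    slot_map: RwLock<Vec<(Q::Key, Arc<Slot<Q>>)>>,
    // LRU list elided.
}

struct Slot<Q: QueryConfig> {
    // The r-w locked query state.
    state: RwLock<QueryState<Q>>,
}

enum QueryState<Q: QueryConfig> {
    NotComputed,
    InProgress {
        // Runtime currently computing the value.
        id: RuntimeId,
        // Set to true if other threads are awaiting the result.
        anyone_waiting: AtomicBool,
    },
    Memoized(Memo<Q>),
}

struct Memo<Q: QueryConfig> {
    value: Option<Q::Value>,
    verified_at: Revision,
    changed_at: Revision,
    durability: Durability,
    inputs: Arc<[DatabaseKeyIndex]>,
}
```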
Fetching the value for a query currently works as follows:
- Acquire the read lock on the (indexable) `slot_map` and hash the key to find the slot.
  - If no slot exists, acquire the write lock and insert one.
- Acquire the slot's internal lock to perform the fetch operation.
Verifying a dependency uses a scheme introduced in RFC #6. Each dependency is represented as a `DatabaseKeyIndex`, which contains three indices (group, query, and key; see the sketch below). The group and query indices are used to find the query storage via `match` statements, and then the next operation depends on the query type:
- Acquire the read lock on the (indexable) `slot_map` and use the key index to load the slot. The read lock is released afterwards.
- Acquire the slot's internal lock to perform the maybe-changed-after operation.
New structure (introduced by this RFC)
The overall structure of derived queries after this RFC is as follows:
- Each derived query has a `DerivedStorage<Q>` (stored in the database) that contains:
  - a set of concurrent hashmaps:
    - `key_map`: maps from `Q::Key` to an internal key index `K`
    - `memo_map`: maps from `K` to the cached memo, an `ArcSwap<Memo<Q>>`
    - `sync_map`: maps from `K` to a `Sync<Q>` synchronization value
  - the LRU set
- A `Memo<Q>` has:
  - an immutable optional value `Option<Q::Value>`
  - dependency information:
    - updatable verified-at (`AtomicCell<Option<Revision>>`)
    - immutable changed-at (`Revision`)
    - immutable durability (`Durability`)
    - an immutable input set (typically an `Arc<[DatabaseKeyIndex]>`)
  - information for the LRU:
    - the `DatabaseKeyIndex`
    - `lru_index`, an `AtomicUsize`
- A `Sync<Q>` has:
  - the `id` of the runtime computing the value
  - `anyone_waiting`: an `AtomicBool` set to true if other threads are awaiting the result
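Again as a rough sketch, the post-RFC layout might be written like this, assuming the `dashmap`, `arc-swap`, and `crossbeam` crates. Names are stand-ins, and `Sync<Q>` is rendered as `SyncState` to avoid clashing with Rust's `Sync` marker trait:

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize};
use std::sync::Arc;

use arc_swap::ArcSwap; // arc-swap crate
use crossbeam::atomic::AtomicCell; // crossbeam crate
use dashmap::DashMap; // dashmap crate

type RuntimeId = u32;
type Revision = u64;
type Durability = u8;
type DatabaseKeyIndex = u32;
type KeyIndex = u32; // the internal key index `K`

trait QueryConfig {
    type Key: std::hash::Hash + Eq;
    type Value;
}

struct DerivedStorage<Q: QueryConfig> {
    key_map: DashMap<Q::Key, KeyIndex>,
    memo_map: DashMap<KeyIndex, ArcSwap<Memo<Q>>>,
    sync_map: DashMap<KeyIndex, SyncState>,
    // LRU set elided.
}

struct Memo<Q: QueryConfig> {
    // Immutable optional value.
    value: Option<Q::Value>,
    // Updatable verified-at; `None` means "invalidated".
    verified_at: AtomicCell<Option<Revision>>,
    // Immutable dependency information.
    changed_at: Revision,
    durability: Durability,
    inputs: Arc<[DatabaseKeyIndex]>,
    // Information for the LRU.
    database_key_index: DatabaseKeyIndex,
    lru_index: AtomicUsize,
}

// The RFC calls this `Sync<Q>`.
struct SyncState {
    id: RuntimeId,
    anyone_waiting: AtomicBool,
}
```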
Fetching the value for a derived query will work as follows:
1. Find the internal index `K` by hashing the key, as today.
   - The precise operation for this will depend on the concurrent hashmap implementation.
2. Load the memo `M: Arc<Memo<Q>>` from `memo_map[K]` (if present); a sketch of this shallow verification appears after this list:
   - If verified-at is `None`, then another thread has found this memo to be invalid; ignore it.
   - Else, let `Rv` be the "last verified revision".
   - If `Rv` is the current revision, or the last change to an input with durability `M.durability` was before `Rv`:
     - Update the "last verified revision" and return the memoized value.
3. Atomically check `sync_map` for an existing `Sync<Q>`:
   - If one exists, block on the thread within and return to step 2 after it completes:
     - If this results in a cycle, unwind as today.
   - If none exists, insert a new entry with the current runtime id.
4. Check dependencies deeply:
   - Iterate over each dependency `D` and check `db.maybe_changed_after(D, Rv)`.
     - If no dependency has changed, update `verified_at` to the current revision and return the memoized value.
   - Otherwise, mark the memo as invalid by storing `None` in the verified-at.
5. Construct the new memo:
   - Push the query onto the local stack and execute the query function:
     - If this query is found to be a cycle participant, execute the recovery function.
   - Backdate the result if it is equal to the old memo's value.
   - Allocate the new memo.
6. Store the results:
   - Store the new memo into `memo_map[K]`.
   - Remove the query from the `sync_map`.
7. Return the newly constructed value.
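Here is a minimal sketch of the shallow verification in step 2, with stand-in types and the assumption that the database exposes, per durability level, the last revision in which an input of that durability changed:

```rust
use crossbeam::atomic::AtomicCell; // crossbeam crate

type Revision = u64;

// Minimal stand-ins for the memo fields used by shallow verification.
struct Memo {
    verified_at: AtomicCell<Option<Revision>>, // `None` => invalidated
    changed_at: Revision,
    durability: usize, // index into `Db::last_changed`; three levels assumed
}

// Hypothetical database-side revision bookkeeping.
struct Db {
    current_revision: Revision,
    // Last revision in which any input of the given durability changed.
    last_changed: [Revision; 3],
}

/// If the memo was verified in this revision, or no input of its durability
/// has changed since `Rv`, mark it verified and report its changed-at.
/// Otherwise return `None`, and the caller falls through to deep
/// verification of each dependency.
fn shallow_verify(db: &Db, memo: &Memo) -> Option<Revision> {
    let rv = memo.verified_at.load()?; // `None` => another thread invalidated it
    if rv == db.current_revision || db.last_changed[memo.durability] <= rv {
        memo.verified_at.store(Some(db.current_revision));
        return Some(memo.changed_at);
    }
    None
}
```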
Verifying a dependency for a derived query will work as follows:
1. Find the internal index `K` by hashing the key, as today.
   - The precise operation for this will depend on the concurrent hashmap implementation.
2. Load the memo `M: Arc<Memo<Q>>` from `memo_map[K]` (if present):
   - If verified-at is `None`, then another thread has found this memo to be invalid; ignore it.
   - Else, let `Rv` be the "last verified revision".
   - If `Rv` is the current revision, return true or false depending on the changed-at recorded in the memo.
   - If the last change to an input with durability `M.durability` was before `Rv`:
     - Update `verified_at` to the current revision and return true or false, as above.
   - Iterate over each dependency `D` and check `db.maybe_changed_after(D, Rv)`.
     - If no dependency has changed, update `verified_at` to the current revision and return true or false, as above.
   - Otherwise, mark the memo as invalid by storing `None` in the verified-at.
3. Atomically check `sync_map` for an existing `Sync<Q>`:
   - If one exists, block on the thread within and return to step 2 after it completes:
     - If this results in a cycle, unwind as today.
   - If none exists, insert a new entry with the current runtime id.
4. Construct the new memo:
   - Push the query onto the local stack and execute the query function:
     - If this query is found to be a cycle participant, execute the recovery function.
   - Backdate the result if it is equal to the old memo's value.
   - Allocate the new memo.
5. Store the results:
   - Store the new memo into `memo_map[K]`.
   - Remove the query from the `sync_map`.
6. Return true or false depending on whether the memo was backdated.
Frequently asked questions
Why use `ArcSwap`?
It's a relatively minor implementation detail, but the code in this PR uses `ArcSwap` to store the values in the memo-map. In the case of a cache hit or other transient operations, this allows us to read from the arc while avoiding a full increment of the ref count. It adds a small bit of complexity, because we have to be careful to do a full load before any recursive operations: arc-swap only gives a fixed number of "guards" per thread before falling back to more expensive loads.
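For illustration, here is a minimal sketch of this pattern using the `arc-swap` crate, with the memo type simplified to a `String`:

```rust
use std::sync::Arc;

use arc_swap::ArcSwap;

fn read_memo(memo: &ArcSwap<String>) {
    // Fast path: a guard-based load that usually avoids touching the
    // reference count at all.
    let guard = memo.load();
    let _len = guard.len(); // use the value while the guard is alive

    // Before any recursive operation that might itself perform loads,
    // promote to a full `Arc`: each thread only has a fixed number of
    // cheap guard slots before arc-swap falls back to slower loads.
    let full: Arc<String> = memo.load_full();
    recurse(&full);
}

fn recurse(_memo: &Arc<String>) {
    // ... might load other memos recursively ...
}
```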
Do we really need `maybe_changed_after` and `fetch`?
Yes, we do. "Maybe changed after" is very similar to "fetch", but it doesn't require that we have a memoized value. This is important for LRU.
The LRU map in the code is just a big lock!
That's not a question. But it's true, I simplified the LRU code to just use a mutex. My assumption is that there are relatively few LRU-ified queries, and their values are relatively expensive to compute, so this is OK. If we find it's a bottleneck, though, I believe we could improve it by using a "zone scheme" similar to what we use now. We would add an `lru_index` to the `Memo` so that we can easily check whether the memo is in the "green zone" when reading (if so, no updates are needed). The complexity there is that when we produce a replacement memo, we have to install it and swap the index. Thinking about that made my brain hurt a little, so I decided to just take the simple option for now.
How do the synchronized / atomic operations compare after this RFC?
After this RFC, to perform a read, in the best case:
- We do one "dashmap get" to map key to key index.
- We do another "dashmap get" from key index to memo.
- We do an "arcswap load" to get the memo.
- We do an "atomiccell read" to load the current revision or durability information.
dashmap is implemented with a striped set of read-write locks, so this is roughly the same (two read locks) as before this RFC. However:
- We no longer do any atomic ref count increments.
- It is theoretically possible to replace dashmap with something that doesn't use locks.
- The first dashmap get should be removable, if we know that the key is a 32 bit integer.
- I plan to propose this in a future RFC.
Yeah yeah, show me some benchmarks!
I didn't run any. I'll get on that.
Meta: about the book itself
Linking policy
We try to avoid links that easily become fragile.
Do:
- Link to `docs.rs` types to document the public API, but modify the link to use `latest` as the version.
- Link to modules in the source code.
- Create "named anchors" and embed source code directly.
Don't:
- Link to direct lines on github, even within a specific commit, unless you are trying to reference a historical piece of code ("how things were at the time").