// Objects represented on ASG
//
// Copyright (C) 2014-2023 Ryan Specialty, LLC.
//
// This file is part of TAME.
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//! Objects represented by the ASG.
//!
//! Dynamic Object Types and Narrowing
//! ==================================
//! Unlike the functional lowering pipeline that precedes it,
//! the ASG is a mutable, ever-evolving graph of dynamic data.
//! The ASG does not benefit from the same type-level guarantees that the
//! rest of the system does at compile-time.
//!
//! However,
//! we _are_ able to utilize the type system to ensure statically that
//! there exists no code path that is able to generate an invalid graph
//! (a graph that does not adhere to its ontology as described below).
//!
//! Any node on the graph can represent any type of [`Object`].
//! An [`ObjectIndex`] contains an index into the graph,
//! _not_ a reference;
//! it is therefore possible (though avoidable) for objects to be
//! modified out from underneath references.
//! Consequently,
//! we cannot trust that an [`ObjectIndex`] is what we expect it to be when
//! performing an operation on the graph using that index,
//! though the system is designed to uphold an invariant that the _type_
//! of [`Object`] cannot be changed.
//!
//! To perform an operation on a particular type of object,
//! we must first _narrow_ it.
//! Narrowing converts from the [`Object`] sum type into a more specific
//! inner type,
//! such as [`Ident`] or [`Expr`].
//! This operation _should_,
//! if the compiler is operating correctly,
//! always succeed,
//! because the type of object should always match our expectations;
//! the explicit narrowing is to ensure memory safety in case that
//! assumption does not hold.
//! To facilitate this in a convenient way,
//! operations returning an [`ObjectIndex`] will be associated with an
//! [`ObjectKind`] that will be used to automatically perform narrowing on
//! subsequent operations using that [`ObjectIndex`].
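The kind-carrying index described above can be sketched with a phantom type parameter. The following is a standalone illustration only, with hypothetical names (`ObjIndex`, `ExprKind`), not TAMER's actual `ObjectIndex` definition:

```rust
use std::marker::PhantomData;

// Hypothetical sketch: an index into the graph tagged with the kind of
// object it is expected to reference, so that the expected type travels
// with the index and later operations can narrow automatically.
#[derive(Debug, Clone, Copy, PartialEq)]
struct ObjIndex<O> {
    index: usize,
    // Zero-sized marker carrying the expected object kind.
    _kind: PhantomData<O>,
}

#[derive(Debug)]
struct ExprKind;

impl<O> ObjIndex<O> {
    fn new(index: usize) -> Self {
        Self {
            index,
            _kind: PhantomData,
        }
    }
}

fn main() {
    // The expected kind is tracked at compile time;
    // at runtime this is nothing more than the integer index.
    let oi: ObjIndex<ExprKind> = ObjIndex::new(3);
    println!("{}", oi.index); // prints 3
}
```

The `PhantomData` marker costs nothing at runtime but lets the type system thread the expected `ObjectKind` through subsequent operations.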
//!
//! Since a type mismatch represents a bug in the compiler,
//! the API favors [`Result`]-free narrowing rather than burdening every
//! caller with additional complexity---we
//! will attempt to narrow and panic in the event of a failure,
//! including a diagnostic message that helps to track down the issue
//! using whatever [`Span`]s we have available.
//! [`ObjectIndex`] is associated with a span derived from the point of its
//! creation to handle this diagnostic situation automatically.
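The `Result`-free narrowing described above can be illustrated with a minimal standalone sketch. The types here are hypothetical stand-ins, and the panic message is a plain string rather than the `Span`-annotated diagnostic TAMER actually produces:

```rust
// Minimal sketch of sum-type narrowing (stand-in types, not TAMER's API).
#[allow(dead_code)]
#[derive(Debug)]
enum Object {
    Ident(String),
    Expr(i32),
}

impl Object {
    // Narrow into the inner Expr value, panicking with a diagnostic
    // message if the assumption about the object's type does not hold;
    // such a mismatch would represent a bug in the compiler itself.
    fn narrow_expr(self) -> i32 {
        match self {
            Object::Expr(e) => e,
            other => panic!("internal error: expected Expr, found {other:?}"),
        }
    }
}

fn main() {
    let obj = Object::Expr(42);
    println!("{}", obj.narrow_expr()); // prints 42
}
```

Because callers never see a `Result`, the happy path stays uncluttered; the cost of a bad assumption is an immediate, diagnosable panic rather than silent misbehavior.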
//!
//! Edge Types and Narrowing
//! ------------------------
//! Unlike nodes,
//! edges may reference [`Object`]s of many different types,
//! as defined by the graph's ontology.
//!
//! The set of [`ObjectKind`] types that may be related _to_
//! (via edges)
//! from other objects are the variants of [`ObjectRelTy`].
//! Each such [`ObjectKind`] must implement [`ObjectRelatable`],
//! where [`ObjectRelatable::Rel`] is an enum whose variants represent a
//! _subset_ of [`Object`]'s variants that are valid targets for edges
//! from that object type.
//! If some [`ObjectKind`] `OA` is able to be related to another
//! [`ObjectKind`] `OB`,
//! then [`ObjectRelTo::<OB>`](ObjectRelTo) is implemented for `OA`.
//!
//! When querying the graph for edges using [`ObjectIndex::edges`],
//! the corresponding [`ObjectRelatable::Rel`] type is provided,
//! which may then be acted upon or filtered by the caller.
//! Unlike nodes,
//! it is difficult to statically expect exact edge types in most code
//! paths
//! (beyond the `Rel` object itself),
//! and so [`ObjectRel::narrow`] produces an [`Option`] of the inner
//! [`ObjectIndex`],
//! rather than panicking.
//! This `Option` is convenient to use with `Iterator::filter_map` to query
//! for specific edge types.
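The `Option`-returning narrowing pairs naturally with `Iterator::filter_map`; a hedged sketch using stand-in types (`Rel` here is an illustrative enum, not TAMER's `ObjectRel`):

```rust
// Standalone sketch: filtering a mixed edge set down to a specific
// target kind via an Option-returning narrow.
#[derive(Debug, Clone)]
enum Rel {
    Ident(u32), // index of an Ident target
    Expr(u32),  // index of an Expr target
}

impl Rel {
    // Narrow to an Expr target index, yielding None for other kinds
    // rather than panicking, since mixed edge types are expected here.
    fn narrow_expr(&self) -> Option<u32> {
        match self {
            Rel::Expr(ix) => Some(*ix),
            _ => None,
        }
    }
}

fn main() {
    let edges = vec![Rel::Ident(1), Rel::Expr(2), Rel::Expr(5)];

    // filter_map discards non-Expr edges and unwraps the rest in one pass.
    let exprs: Vec<u32> =
        edges.iter().filter_map(Rel::narrow_expr).collect();

    println!("{exprs:?}"); // prints [2, 5]
}
```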
//!
//! Using [`ObjectRelTo`],
//! we are able to ensure statically that all code paths only add edges to
//! the [`Asg`] that adhere to the ontology described above;
//! it should therefore not be possible for an edge to exist on the
//! graph that is not represented by [`ObjectRelatable::Rel`],
//! provided that it is properly defined.
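The static edge-ontology guarantee can be sketched with a marker trait bounding edge insertion, in the spirit of `ObjectRelTo`. All names and the ontology below are illustrative only:

```rust
// Unit structs standing in for object kinds.
struct Pkg;
struct Expr;
struct Ident;

// "A source kind implementing RelTo<T> may have an edge to target kind T."
trait RelTo<T> {}

// Hypothetical ontology: packages may reference identifiers and
// expressions; expressions may reference identifiers.
impl RelTo<Ident> for Pkg {}
impl RelTo<Expr> for Pkg {}
impl RelTo<Ident> for Expr {}

// Edge insertion is statically restricted to ontology-permitted pairs;
// any other pairing is a compile-time error, not a runtime check.
fn add_edge<S: RelTo<T>, T>(_from: &S, _to: &T) -> &'static str {
    "edge added"
}

fn main() {
    println!("{}", add_edge(&Pkg, &Ident)); // prints "edge added"
    // add_edge(&Ident, &Pkg); // would not compile: `Ident: RelTo<Pkg>` unsatisfied
}
```

The commented-out call shows the point of the bound: an edge the ontology does not permit is rejected before the program ever runs.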
//! Since [`ObjectRel`] narrows into an [`ObjectIndex`],
//! the system will produce runtime panics if there is ever any attempt to
//! follow an edge to an unexpected [`ObjectKind`].
use super::Asg;
use crate::{
diagnose::{panic::DiagnosticPanic, Annotate, AnnotatedSpan},
diagnostic_panic,
f::Functor,
span::{Span, UNKNOWN_SPAN},
};
use petgraph::graph::NodeIndex;
use std::{convert::Infallible, fmt::Display, marker::PhantomData};
pub mod expr;
pub mod ident;
pub mod pkg;
pub mod root;
pub use expr::Expr;
|
|
|
|
|
pub use ident::Ident;
|
|
|
|
|
pub use pkg::Pkg;
|
2023-01-31 22:00:51 -05:00
|
|
|
|
pub use root::Root;
|
2023-01-17 22:58:41 -05:00
|
|
|
|
|

/// An object on the ASG.
///
/// See the [module-level documentation](super) for more information.
#[derive(Debug, PartialEq)]
pub enum Object {
    /// Represents the root of all reachable identifiers.
    ///
    /// Any identifier not reachable from the root will not be linked into
    ///   the final executable.
    ///
    /// There should be only one object of this kind.
    Root(Root),

    /// A package of identifiers.
    Pkg(Pkg),

    /// Identifier (a named object).
    Ident(Ident),

    /// Expression.
    ///
    /// An expression may optionally be named by one or more [`Ident`]s.
    Expr(Expr),
}

/// Object types corresponding to variants in [`Object`].
///
/// These are used as small tags for [`ObjectRelatable`].
/// Rust unfortunately makes working with its internal tags difficult,
///   despite their efforts with [`std::mem::Discriminant`],
///   which requires a _value_ to produce.
///
/// TODO: `pub(super)` when the graph can be better encapsulated.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
pub enum ObjectRelTy {
    Root,
    Pkg,
    Ident,
    Expr,
}

impl Display for Object {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        match self {
            Self::Root(_) => write!(f, "root ASG node"),
            Self::Pkg(pkg) => Display::fmt(pkg, f),
            Self::Ident(ident) => Display::fmt(ident, f),
            Self::Expr(expr) => Display::fmt(expr, f),
        }
    }
}

impl Object {
    pub fn span(&self) -> Span {
        match self {
            Self::Root(_) => UNKNOWN_SPAN,
            Self::Pkg(pkg) => pkg.span(),
            Self::Ident(ident) => ident.span(),
            Self::Expr(expr) => expr.span(),
        }
    }

    /// Retrieve an [`Ident`] reference,
    ///   or [`None`] if the object is not an identifier.
    pub fn as_ident_ref(&self) -> Option<&Ident> {
        match self {
            Self::Ident(ident) => Some(ident),
            _ => None,
        }
    }
    /// Unwraps an object as an [`Ident`],
    ///   panicking if the object is of a different type.
    ///
    /// This should be used only when a panic would represent an internal
    ///   error resulting from state inconsistency on the graph.
    /// Ideally,
    ///   the graph would be typed in such a way to prevent this type of
    ///   thing from occurring in the future.
    pub fn unwrap_ident(self) -> Ident {
        self.into()
    }

    /// Unwraps an object as an [`&Ident`](Ident),
    ///   panicking if the object is of a different type.
    ///
    /// This should be used only when a panic would represent an internal
    ///   error resulting from state inconsistency on the graph.
    /// Ideally,
    ///   the graph would be typed in such a way to prevent this type of
    ///   thing from occurring in the future.
    pub fn unwrap_ident_ref(&self) -> &Ident {
        match self {
            Self::Ident(ident) => ident,
            x => panic!("internal error: expected Ident, found {x:?}"),
        }
    }

    /// Diagnostic panic after failing to narrow an object.
    ///
    /// This is an internal method.
    /// `expected` should contain "a"/"an".
    fn narrowing_panic(&self, expected: &str) -> ! {
        diagnostic_panic!(
            self.span()
                .internal_error(format!(
                    "expected this object to be {expected}"
                ))
                .into(),
            "expected {expected}, found {self}",
        )
    }
}

impl From<&Object> for Span {
    fn from(val: &Object) -> Self {
        val.span()
    }
}

impl From<Root> for Object {
    fn from(root: Root) -> Self {
        Self::Root(root)
    }
}

impl From<Pkg> for Object {
    fn from(pkg: Pkg) -> Self {
        Self::Pkg(pkg)
    }
}

impl From<Ident> for Object {
    fn from(ident: Ident) -> Self {
        Self::Ident(ident)
    }
}
impl From<Expr> for Object {
    fn from(expr: Expr) -> Self {
        Self::Expr(expr)
    }
}

impl From<Object> for Root {
    /// Narrow an object into a [`Root`],
    ///   panicking if the object is not of that type.
    fn from(val: Object) -> Self {
        match val {
            Object::Root(root) => root,
            _ => val.narrowing_panic("the root"),
        }
    }
}

impl From<Object> for Pkg {
    /// Narrow an object into a [`Pkg`],
    ///   panicking if the object is not of that type.
    fn from(val: Object) -> Self {
        match val {
            Object::Pkg(pkg) => pkg,
            _ => val.narrowing_panic("a package"),
        }
    }
}
tamer: Integrate clippy
This invokes clippy as part of `make check` now, which I had previously
avoided doing (I'll elaborate on that below).
This commit represents the changes needed to resolve all the warnings
presented by clippy. Many changes have been made where I find the lints to
be useful and agreeable, but there are a number of lints, rationalized in
`src/lib.rs`, where I found the lints to be disagreeable. I have provided
rationale, primarily for those wondering why I desire to deviate from the
default lints, though it does feel backward to rationalize why certain lints
ought to be applied (the reverse should be true).
With that said, this did catch some legitimate issues, and it was also
helpful in getting some older code up-to-date with new language additions
that perhaps I used in new code but hadn't gone back and updated old code
for. My goal was to get clippy working without errors so that, in the
future, when others get into TAMER and are still getting used to Rust,
clippy is able to help guide them in the right direction.
One of the reasons I went without clippy for so long (though I admittedly
forgot I wasn't using it for a period of time) was because there were a
number of suggestions that I found disagreeable, and I didn't take the time
to go through them and determine what I wanted to follow. Furthermore, it
was hard to make that judgment when I was new to the language and lacked
the necessary experience to do so.
One thing I would like to comment further on is the use of `format!` with
`expect`, which is also what the diagnostic system convenience methods
do (which clippy does not cover). Because of all the work I've done trying
to understand Rust and looking at disassemblies and seeing what it
optimizes, I falsely assumed that Rust would convert such things into
conditionals in my otherwise-pure code...but apparently that's not the case,
when `format!` is involved.
I noticed that, after making the suggested fix with `get_ident`, Rust
proceeded to then inline it into each call site and then apply further
optimizations. It was also previously invoking the thread lock (for the
interner) unconditionally and invoking the `Display` implementation. That
is not at all what I intended for, despite knowing the eager semantics of
function calls in Rust.
Anyway, possibly more to come on that, I'm just tired of typing and need to
move on. I'll be returning to investigate further diagnostic messages soon.
2023-01-12 10:46:48 -05:00
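The `format!`-with-`expect` pitfall described above can be shown with a small sketch (illustrative names, not TAMER's diagnostic API): the message argument to `expect` is evaluated eagerly, even on the success path, whereas `unwrap_or_else` defers the work to the failure branch.

```rust
// Eager: the `format!` allocation happens on every call, even when
// `opt` is `Some` and the message is never used.
fn get_eager(opt: Option<u32>, name: &str) -> u32 {
    opt.expect(&format!("missing {name}"))
}

// Lazy: the panic message is only constructed on the failure path,
// keeping the happy path free of the formatting and its allocations.
fn get_lazy(opt: Option<u32>, name: &str) -> u32 {
    opt.unwrap_or_else(|| panic!("missing {name}"))
}

fn main() {
    assert_eq!(get_eager(Some(1), "x"), 1);
    assert_eq!(get_lazy(Some(2), "y"), 2);
}
```

This is the distinction clippy's suggestion surfaced: the closure form matches the conditional evaluation the author had assumed `expect` already performed.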

impl From<Object> for Ident {
    /// Narrow an object into an [`Ident`],
    ///   panicking if the object is not of that type.
    fn from(val: Object) -> Self {
        match val {
            Object::Ident(ident) => ident,
            _ => val.narrowing_panic("an identifier"),
        }
    }
}
impl From<Object> for Expr {
|
tamer: Initial concept for AIR/ASG Expr
This begins to place expressions on the graph---something that I've been
thinking about for a couple of years now, so it's interesting to finally be
doing it.
This is going to evolve; I want to get some things committed so that it's
clear how I'm moving forward. The ASG makes things a bit awkward for a
number of reasons:
1. I'm dealing with older code where I had a different model of doing
things;
2. It's mutable, rather than the mostly-functional lowering pipeline;
3. We're dealing with an aggregate ever-evolving blob of data (the graph)
rather than a stream of tokens; and
4. We don't have as many type guarantees.
I've shown with the lowering pipeline that I'm able to take a mutable
reference and convert it into something that's both functional and
performant, where I remove it from its container (an `Option`), create a new
version of it, and place it back. Rust is able to optimize away the memcpys
and such and just directly manipulate the underlying value, which is often a
register with all of the inlining.
_But_ this is a different scenario now. The lowering pipeline has a narrow
context. The graph has to keep hitting memory. So we'll see how this
goes. But it's most important to get this working and measure how it
performs; I'm not trying to prematurely optimize. My attempts right now are
for the way that I wish to develop.
Speaking to #4 above, it also sucks that I'm not able to type the
relationships between nodes on the graph. Rather, it's not that I _can't_,
but a project to created a typed graph library is beyond the scope of this
work and would take far too much time. I'll leave that to a personal,
non-work project. Instead, I'm going to have to narrow the type any time
the graph is accessed. And while that sucks, I'm going to do my best to
encapsulate those details to make it as seamless as possible API-wise. The
performance hit of performing the narrowing I'm hoping will be very small
relative to all the business logic going on (a single cache miss is bound to
be far more expensive than many narrowings which are just integer
comparisons and branching)...but we'll see. Introducing branching sucks,
but branch prediction is pretty damn good in modern CPUs.
DEV-13160
2022-12-21 16:47:04 -05:00
    /// Narrow an object into an [`Expr`],
    ///   panicking if the object is not of that type.
tamer: Integrate clippy
This invokes clippy as part of `make check` now, which I had previously
avoided doing (I'll elaborate on that below).
This commit represents the changes needed to resolve all the warnings
presented by clippy. Many changes have been made where I find the lints to
be useful and agreeable, but there are a number of lints, rationalized in
`src/lib.rs`, where I found the lints to be disagreeable. I have provided
rationale, primarily for those wondering why I desire to deviate from the
default lints, though it does feel backward to rationalize why certain lints
ought to be applied (the reverse should be true).
With that said, this did catch some legitimate issues, and it was also
helpful in getting some older code up-to-date with new language additions
that perhaps I used in new code but hadn't gone back and updated old code
for. My goal was to get clippy working without errors so that, in the
future, when others get into TAMER and are still getting used to Rust,
clippy is able to help guide them in the right direction.
One of the reasons I went without clippy for so long (though I admittedly
forgot I wasn't using it for a period of time) was because there were a
number of suggestions that I found disagreeable, and I didn't take the time
to go through them and determine what I wanted to follow. Furthermore, it
was hard to make that judgment when I was new to the language and lacked
the necessary experience to do so.
One thing I would like to comment further on is the use of `format!` with
`expect`, which is also what the diagnostic system convenience methods
do (which clippy does not cover). Because of all the work I've done trying
to understand Rust and looking at disassemblies and seeing what it
optimizes, I falsely assumed that Rust would convert such things into
conditionals in my otherwise-pure code...but apparently that's not the case,
when `format!` is involved.
I noticed that, after making the suggested fix with `get_ident`, Rust
proceeded to then inline it into each call site and then apply further
optimizations. It was also previously invoking the thread lock (for the
interner) unconditionally and invoking the `Display` implementation. That
is not at all what I intended for, despite knowing the eager semantics of
function calls in Rust.
Anyway, possibly more to come on that, I'm just tired of typing and need to
move on. I'll be returning to investigate further diagnostic messages soon.
2023-01-12 10:46:48 -05:00
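The commit message above observes that an argument to `expect` built with `format!` is evaluated eagerly, even on the success path. A minimal sketch of the difference (toy values, not TAMER code) contrasts the eager form with the lazy `unwrap_or_else` alternative:

```rust
fn main() {
    let v: Option<u32> = Some(42);

    // Eager: the format! argument allocates a String before expect ever
    // inspects the Option, even though the Some path never uses it.
    let a = v.expect(&format!("missing value at {}", 10));

    // Lazy: the closure body runs only on the None path.
    let b = v.unwrap_or_else(|| panic!("missing value at {}", 10));

    assert_eq!(a + b, 84);
}
```

This is the shape of the fix the message alludes to; the diagnostic-system convenience methods mentioned there are TAMER-specific.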
    fn from(val: Object) -> Self {
        match val {
            Object::Expr(expr) => expr,
            _ => val.narrowing_panic("an expression"),
        }
    }
}

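The narrowing conversion above can be sketched in isolation. This is an illustrative toy, not TAMER's actual types: `ToyObject`, `ToyExpr`, `ToyIdent`, and this `narrowing_panic` are minimal stand-ins for the real `Object` sum type and its diagnostic panic.

```rust
#[derive(Debug, PartialEq)]
struct ToyExpr(u32);

#[derive(Debug, PartialEq)]
struct ToyIdent(u32);

// A miniature sum type standing in for the ASG's `Object`.
#[derive(Debug, PartialEq)]
enum ToyObject {
    Expr(ToyExpr),
    Ident(ToyIdent),
}

impl ToyObject {
    // Stand-in for the diagnostic panic used when a narrowing
    // expectation is violated.
    fn narrowing_panic(&self, expected: &str) -> ! {
        panic!("expected {expected}, but found {self:?}")
    }
}

impl From<ToyObject> for ToyExpr {
    fn from(val: ToyObject) -> Self {
        match val {
            ToyObject::Expr(expr) => expr,
            // `val` is still usable here: no move occurred on this path.
            _ => val.narrowing_panic("an expression"),
        }
    }
}

fn main() {
    let expr: ToyExpr = ToyObject::Expr(ToyExpr(42)).into();
    assert_eq!(expr, ToyExpr(42));
}
```

The runtime cost is the integer tag comparison and a branch, which is the "many narrowings" cost the commit message weighs against cache misses.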
2023-01-20 10:45:10 -05:00

impl AsRef<Object> for Object {
    fn as_ref(&self) -> &Object {
        self
    }
}

2023-01-31 22:00:51 -05:00

impl AsRef<Root> for Object {
    fn as_ref(&self) -> &Root {
        match self {
            Object::Root(ref root) => root,
            _ => self.narrowing_panic("the root"),
        }
    }
}

2023-01-30 16:51:24 -05:00

impl AsRef<Pkg> for Object {
    fn as_ref(&self) -> &Pkg {
        match self {
            Object::Pkg(ref pkg) => pkg,
            _ => self.narrowing_panic("a package"),
        }
    }
}

2023-01-20 10:45:10 -05:00

impl AsRef<Ident> for Object {
    fn as_ref(&self) -> &Ident {
        match self {
            Object::Ident(ref ident) => ident,
            _ => self.narrowing_panic("an identifier"),
        }
    }
}
2022-12-22 16:32:21 -05:00

2023-01-20 10:45:10 -05:00

impl AsRef<Expr> for Object {
    fn as_ref(&self) -> &Expr {
        match self {
            Object::Expr(ref expr) => expr,
            _ => self.narrowing_panic("an expression"),
        }
    }
}

2022-12-22 14:24:40 -05:00

/// An [`Object`]-compatible entity.
///
/// See [`ObjectIndex`] for more information.
/// This type simply must be convertible both to and from [`Object`] so that
///   operations on the graph that retrieve its value can narrow into it,
///   and operations writing it back can expand it back into [`Object`].
///
/// Note that [`Object`] is also an [`ObjectKind`],
///   if you do not desire narrowing.
2023-01-20 10:45:10 -05:00
pub trait ObjectKind = Into<Object> where Object: Into<Self> + AsRef<Self>;


/// Index representing an [`Object`] stored on the [`Asg`](super::Asg).
///
/// Object references are integer offsets,
///   not pointers.
/// See the [module-level documentation][self] for more information.
///
/// The associated [`ObjectKind`] states an _expectation_ that,
///   when this [`ObjectIndex`] is used to perform an operation on the ASG,
///   it will operate on an object of type `O`.
/// This type will be verified at runtime during any graph operation,
///   resulting in a panic if the expectation is not met;
///   see the [module-level documentation][self] for more information.
///
/// This object is associated with a [`Span`] that identifies the source
///   location from which this object was derived;
///   this is intended to be used to provide diagnostic information in the
///   event that the object somehow becomes unavailable for later
///   operations.
///
/// _The span is not accounted for in [`PartialEq`]_,
///   since it represents the context in which the [`ObjectIndex`] was
///   retrieved,
///   and the span associated with the underlying [`Object`] may evolve
///   over time.
#[derive(Debug)]
pub struct ObjectIndex<O: ObjectKind>(NodeIndex, Span, PhantomData<O>);
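The span-blind equality described in the doc comment above calls for a manual `PartialEq` that compares only the node index. A minimal sketch with toy `Span` and `Index` types (not TAMER's):

```rust
use std::marker::PhantomData;

// Toy span; the real one is TAMER's `Span`.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Span(u32);

// Typed index in the style of `ObjectIndex`: an integer offset plus a
// phantom marker for the expected object kind.
struct Index<O>(usize, Span, PhantomData<O>);

// Manual impl so that two indices to the same node compare equal even
// when they were retrieved in different source contexts (different spans).
impl<O> PartialEq for Index<O> {
    fn eq(&self, other: &Self) -> bool {
        self.0 == other.0
    }
}

fn main() {
    struct Expr;
    let a = Index::<Expr>(7, Span(1), PhantomData);
    let b = Index::<Expr>(7, Span(2), PhantomData);
    assert!(a == b); // same node, different spans: still equal
}
```

Deriving `PartialEq` instead would compare the span too, which is exactly what the doc comment rules out.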

// Deriving this trait seems to silently fail at the time of writing
//   (2022-12-22, Rust 1.68.0-nightly).
impl<O: ObjectKind> Clone for ObjectIndex<O> {
    fn clone(&self) -> Self {
        Self(self.0, self.1, self.2)
    }
}

impl<O: ObjectKind> Copy for ObjectIndex<O> {}
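One common reason derived `Clone`/`Copy` misbehave on a type like this (whether or not it is the failure noted in the comment above) is that the derive adds an `O: Clone`/`O: Copy` bound, even though `PhantomData<O>` is always both. A hedged sketch of the manual-impl workaround, with toy types:

```rust
use std::marker::PhantomData;

// Deliberately not Clone, standing in for an object kind that may not be.
struct NotClone;

// With `#[derive(Clone, Copy)]`, the generated impls would be bounded on
// `O: Clone` / `O: Copy`, so `Index<NotClone>` would not be Copy even
// though its actual fields (a usize and a PhantomData) always are.
struct Index<O>(usize, PhantomData<O>);

// Manual impls avoid the unwanted bound on `O`.
impl<O> Clone for Index<O> {
    fn clone(&self) -> Self {
        *self // delegate to Copy
    }
}

impl<O> Copy for Index<O> {}

fn main() {
    let a = Index::<NotClone>(1, PhantomData);
    let b = a; // Copy: `a` remains usable afterward
    assert_eq!(a.0, b.0);
}
```

The manual `Clone`/`Copy` impls on `ObjectIndex` above have the same shape.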

impl<O: ObjectKind> ObjectIndex<O> {
tamer: asg: Add expression edges
This introduces a number of abstractions, whose concepts are not fully
documented yet since I want to see how it evolves in practice first.
This introduces the concept of edge ontology (similar to a schema) using the
type system. Even though we are not able to determine what the graph will
look like statically---since that's determined by data fed to us at
runtime---we _can_ ensure that the code _producing_ the graph from those
data will produce a graph that adheres to its ontology.
Because of the typed `ObjectIndex`, we're also able to implement operations
that are specific to the type of object that we're operating on. Though,
since the type is not (yet?) stored on the edge itself, it is possible to
walk the graph without looking at node weights (the `ObjectContainer`) and
therefore avoid panics for invalid type assumptions, which is bad, but I
don't think that'll happen in practice, since we'll want to be resolving
nodes at some point. But I'll address that more in the future.
Another thing to note is that walking edges is only done in tests right now,
and so there's no filtering or anything; once there are nodes (if there are
nodes) that allow for different outgoing edge types, we'll almost certainly
want filtering as well, rather than panicking. We'll also want to be able to
query for any object type, but filter only to what's permitted by the
ontology.
DEV-13160
2023-01-11 15:49:37 -05:00
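The edge ontology described in the commit message above can be sketched as a marker trait: an edge A → B may be added only when a `RelTo<B>` impl exists for A, so code producing an invalid edge fails to compile. These are toy types; the real trait is TAMER's `ObjectRelTo`.

```rust
use std::marker::PhantomData;

struct Pkg;
struct Ident;
struct Expr;

// Typed index in the style of `ObjectIndex`.
struct Oi<O>(usize, PhantomData<O>);

// The ontology: which source kinds may point at which target kinds.
trait RelTo<Target> {}
impl RelTo<Ident> for Pkg {}
impl RelTo<Expr> for Ident {}
impl RelTo<Expr> for Expr {} // expressions may nest

struct Graph {
    edges: Vec<(usize, usize)>,
}

impl Graph {
    // Compiles only for ontologically valid edges.
    fn add_edge<A: RelTo<B>, B>(&mut self, from: Oi<A>, to: Oi<B>) {
        self.edges.push((from.0, to.0));
    }
}

fn main() {
    let mut g = Graph { edges: vec![] };
    g.add_edge(Oi::<Pkg>(0, PhantomData), Oi::<Ident>(1, PhantomData));
    g.add_edge(Oi::<Ident>(1, PhantomData), Oi::<Expr>(2, PhantomData));
    // g.add_edge(Oi::<Expr>(2, PhantomData), Oi::<Pkg>(0, PhantomData));
    //   ^ would not compile: the ontology has no `RelTo<Pkg> for Expr`.
    assert_eq!(g.edges.len(), 2);
}
```

As the message notes, this constrains the code *producing* the graph; it does not by itself type the stored edges.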
    pub fn new<S: Into<Span>>(index: NodeIndex, span: S) -> Self {
        Self(index, span.into(), PhantomData::default())
    }

2023-01-30 16:51:24 -05:00

    /// The source location from which the request for the associated object
    ///   was derived.
    ///
    /// The span _does not necessarily represent_ the span of the target
    ///   [`Object`].
    /// If the object is being created,
    ///   then it may,
    ///   but it otherwise represents the location of whatever is
    ///   _requesting_ the object.
    pub fn span(&self) -> Span {
        match self {
            Self(_, span, _) => *span,
        }
    }


    /// Add an edge from `self` to `to_oi` on the provided [`Asg`].
    ///
    /// An edge can only be added if ontologically valid;
    ///   see [`ObjectRelTo`] for more information.
    ///
    /// See also [`Self::add_edge_from`].
    pub fn add_edge_to<OB: ObjectKind>(
        self,
        asg: &mut Asg,
        to_oi: ObjectIndex<OB>,
    ) -> Self
    where
        O: ObjectRelTo<OB>,
    {
        asg.add_edge(self, to_oi);
        self
    }

    /// Add an edge from `from_oi` to `self` on the provided [`Asg`].
    ///
    /// An edge can only be added if ontologically valid;
    ///   see [`ObjectRelTo`] for more information.
    ///
    /// See also [`Self::add_edge_to`].
    pub fn add_edge_from<OB: ObjectKind>(
        self,
        asg: &mut Asg,
        from_oi: ObjectIndex<OB>,
    ) -> Self
    where
        OB: ObjectRelTo<O>,
    {
        from_oi.add_edge_to(asg, self);
        self
    }

    /// Create an iterator over the [`ObjectIndex`]es of the outgoing edges
    ///   of `self`.
    ///
    /// Note that the [`ObjectRelatable`] implementation for `O` determines
    ///   what type of [`ObjectIndex`]es will be yielded by the returned
    ///   iterator;
    ///   this method does nothing to filter non-matches.
2023-01-23 11:40:10 -05:00
    pub fn edges<'a>(
        self,
        asg: &'a Asg,
    ) -> impl Iterator<Item = <O as ObjectRelatable>::Rel> + 'a
    where
        O: ObjectRelatable + 'a,
|
|
|
|
{
|
2023-01-23 11:40:10 -05:00
|
|
|
|
asg.edges(self)
|
tamer: asg: Add expression edges
This introduces a number of abstractions, whose concepts are not fully
documented yet since I want to see how it evolves in practice first.
This introduces the concept of edge ontology (similar to a schema) using the
type system. Even though we are not able to determine what the graph will
look like statically---since that's determined by data fed to us at
runtime---we _can_ ensure that the code _producing_ the graph from those
data will produce a graph that adheres to its ontology.
Because of the typed `ObjectIndex`, we're also able to implement operations
that are specific to the type of object that we're operating on. Though,
since the type is not (yet?) stored on the edge itself, it is possible to
walk the graph without looking at node weights (the `ObjectContainer`) and
therefore avoid panics for invalid type assumptions, which is bad, but I
don't think that'll happen in practice, since we'll want to be resolving
nodes at some point. But I'll addres that more in the future.
Another thing to note is that walking edges is only done in tests right now,
and so there's no filtering or anything; once there are nodes (if there are
nodes) that allow for different outgoing edge types, we'll almost certainly
want filtering as well, rather than panicing. We'll also want to be able to
query for any object type, but filter only to what's permitted by the
ontology.
DEV-13160
2023-01-11 15:49:37 -05:00
|
|
|
|
}
    /// Iterate over the [`ObjectIndex`]es of the outgoing edges of `self`
    ///   that match the [`ObjectKind`] `OB`.
    ///
    /// This is simply a shorthand for applying [`ObjectRel::narrow`] via
    ///   [`Iterator::filter_map`].
    pub fn edges_filtered<'a, OB: ObjectKind + ObjectRelatable + 'a>(
        self,
        asg: &'a Asg,
    ) -> impl Iterator<Item = ObjectIndex<OB>> + 'a
    where
        O: ObjectRelTo<OB> + 'a,
    {
        self.edges(asg).filter_map(ObjectRel::narrow::<OB>)
    }
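To make the `narrow`-based filtering above concrete, here is a small self-contained sketch. The `Idx`, `Rel`, `Expr`, and `Ident` names are hypothetical stand-ins (TAMER's real `ObjectIndex` and `ObjectRel` differ in detail); the point is only that `edges_filtered` reduces to `filter_map` over a narrowing function:

```rust
use std::marker::PhantomData;

// Hypothetical stand-ins for the ASG's typed indices;
//   the narrowing pattern, not the types, is what is being illustrated.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Idx<T>(usize, PhantomData<T>);

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Expr;
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Ident;

// Each edge targets one of a closed set of kinds (the "ontology").
#[derive(Debug, Clone, Copy)]
enum Rel {
    Expr(Idx<Expr>),
    Ident(Idx<Ident>),
}

impl Rel {
    // Lift the dynamically tagged edge into an `Option` of a
    //   statically typed index, in the style of `ObjectRel::narrow`.
    fn narrow_expr(self) -> Option<Idx<Expr>> {
        match self {
            Rel::Expr(oi) => Some(oi),
            _ => None,
        }
    }
}

// The filtered walk is then just `filter_map` over the narrowing:
//   edges of other kinds are silently dropped rather than panicking.
fn edges_filtered_expr(edges: &[Rel]) -> Vec<Idx<Expr>> {
    edges.iter().copied().filter_map(Rel::narrow_expr).collect()
}
```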
    /// Incoming edges to `self` filtered by [`ObjectKind`] `OI`.
    ///
    /// For filtering rationale,
    ///   see [`Asg::incoming_edges_filtered`].
    fn incoming_edges_filtered<'a, OI: ObjectKind + ObjectRelatable + 'a>(
        self,
        asg: &'a Asg,
    ) -> impl Iterator<Item = ObjectIndex<OI>> + 'a
    where
        O: ObjectRelFrom<OI> + 'a,
    {
        asg.incoming_edges_filtered(self)
    }
    /// Resolve `self` to the object that it references.
    ///
    /// Panics
    /// ======
    /// If our [`ObjectKind`] `O` does not match the actual type of the
    ///   object on the graph,
    ///     the system will panic.
    pub fn resolve(self, asg: &Asg) -> &O {
        asg.expect_obj(self)
    }
    /// Curried [`Self::resolve`].
    pub fn cresolve<'a>(asg: &'a Asg) -> impl FnMut(Self) -> &'a O {
        move |oi| oi.resolve(asg)
    }
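The currying in `cresolve` (binding the borrowed graph now, accepting the index later) is what lets the result feed directly into `Iterator::map`. Here is a sketch with a plain slice standing in for the borrowed `Asg`; the names are hypothetical:

```rust
// Minimal sketch of currying a two-argument lookup:
//   bind the borrowed store now, take the index later,
//   so that the closure can be handed to `Iterator::map`.
fn cresolve<'a>(store: &'a [&'a str]) -> impl FnMut(usize) -> &'a str {
    move |i| store[i]
}
```

The closure borrows the store for `'a`, so the resolved references outlive the iteration just as `resolve`'s `&'a O` does.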
    /// Resolve the identifier and map over the resulting [`Object`]
    ///   narrowed to [`ObjectKind`] `O`,
    ///     replacing the object on the given [`Asg`].
    ///
    /// While the provided map may be pure,
    ///   this does mutate the provided [`Asg`].
    ///
    /// If the operation fails,
    ///   `f` is expected to provide an object
    ///     (such as the original)
    ///     to return to the graph.
    ///
    /// If this operation is [`Infallible`],
    ///   see [`Self::map_obj`].
    pub fn try_map_obj<E>(
        self,
        asg: &mut Asg,
        f: impl FnOnce(O) -> Result<O, (O, E)>,
    ) -> Result<Self, E> {
        asg.try_map_obj(self, f)
    }
    /// Resolve the identifier and infallibly map over the resulting
    ///   [`Object`] narrowed to [`ObjectKind`] `O`,
    ///     replacing the object on the given [`Asg`].
    ///
    /// If this operation is _not_ [`Infallible`],
    ///   see [`Self::try_map_obj`].
    pub fn map_obj(self, asg: &mut Asg, f: impl FnOnce(O) -> O) -> Self {
        // This verbose notation (in place of e.g. `unwrap`) is intentional
        //   to emphasize why it's unreachable and to verify our assumptions
        //   at every point.
        match self.try_map_obj::<Infallible>(asg, |o| Ok(f(o))) {
            Ok(oi) => oi,
            Err::<_, Infallible>(_) => unreachable!(),
        }
    }
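The `Infallible` derivation in `map_obj` is a reusable pattern: implement the fallible primitive once, and let the error type prove the infallible wrapper's `Err` arm dead at compile time. Below is a minimal self-contained sketch under assumed names (`try_map`, `map`, with an `Option` slot standing in for the graph's object container); it is an illustration of the pattern, not the ASG API itself:

```rust
use std::convert::Infallible;

// Fallible primitive:
//   take the value out, transform it, and place a value back on the
//   error path too, so the slot is never left empty.
fn try_map<T, E>(
    slot: &mut Option<T>,
    f: impl FnOnce(T) -> Result<T, (T, E)>,
) -> Result<(), E> {
    let val = slot.take().expect("slot must be occupied");

    match f(val) {
        Ok(next) => {
            *slot = Some(next);
            Ok(())
        }
        Err((orig, e)) => {
            // On failure, `f` hands back an object (such as the
            //   original) to return to the slot.
            *slot = Some(orig);
            Err(e)
        }
    }
}

// Infallible wrapper derived from the primitive:
//   `Infallible` has no values, so the `Err` arm cannot be reached.
fn map<T>(slot: &mut Option<T>, f: impl FnOnce(T) -> T) {
    match try_map::<T, Infallible>(slot, |v| Ok(f(v))) {
        Ok(()) => (),
        Err(_) => unreachable!(),
    }
}
```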
    /// Lift [`Self`] into [`Option`] and [`filter`](Option::filter) based
    ///   on whether the [`ObjectRelatable::rel_ty`] of [`Self`]'s `O`
    ///   matches that of `OB`.
    ///
    /// More intuitively:
    ///   if `OB` is the same [`ObjectKind`] associated with [`Self`],
    ///     return [`Some(Self)`](Some).
    ///   Otherwise,
    ///     return [`None`].
    fn filter_rel<OB: ObjectKind + ObjectRelatable>(
        self,
    ) -> Option<ObjectIndex<OB>>
    where
        O: ObjectRelatable,
    {
        let Self(index, span, _pd) = self;

        // Rust doesn't know that `OB` and `O` will be the same,
        //   but this will be the case.
        // If it weren't,
        //   then [`ObjectIndex`] protects us at runtime,
        //   so there are no safety issues here.
        Some(ObjectIndex::<OB>(index, span, PhantomData::default()))
            .filter(|_| O::rel_ty() == OB::rel_ty())
    }
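The kind check in `filter_rel` reduces to comparing two runtime kind tokens and rebinding the phantom type parameter only on a match. The sketch below uses hypothetical `Kind` and `RelTy` names in place of `ObjectRelatable` and its `rel_ty` token; it illustrates the pattern, not the ASG's actual trait:

```rust
use std::marker::PhantomData;

// Hypothetical kind tokens standing in for `ObjectRelatable::rel_ty`.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
enum RelTy {
    Expr,
    Ident,
}

trait Kind {
    fn rel_ty() -> RelTy;
}

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Expr;
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Ident;

impl Kind for Expr {
    fn rel_ty() -> RelTy {
        RelTy::Expr
    }
}
impl Kind for Ident {
    fn rel_ty() -> RelTy {
        RelTy::Ident
    }
}

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Idx<T>(usize, PhantomData<T>);

impl<O: Kind> Idx<O> {
    // Rebind the index to kind `OB` only if the two kinds' runtime
    //   tokens agree; otherwise yield `None`, as `filter_rel` does.
    fn filter_rel<OB: Kind>(self) -> Option<Idx<OB>> {
        let Idx(index, _pd) = self;

        Some(Idx(index, PhantomData)).filter(|_| O::rel_ty() == OB::rel_ty())
    }
}
```

The branch is an integer comparison on the tokens, which is the cheap runtime check the commit message above alludes to.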
    /// Root this object in the ASG.
    ///
    /// A rooted object is forced to be reachable.
    /// This should only be utilized when necessary for toplevel objects;
    ///   other objects should be reachable via their relations to other
    ///   objects.
    /// Forcing objects to be reachable can prevent them from being
    ///   optimized away if they are not used.
    pub fn root(self, asg: &mut Asg) -> Self
    where
        Root: ObjectRelTo<O>,
    {
        asg.root(self.span()).add_edge_to(asg, self);
        self
    }
}
impl ObjectIndex<Object> {
    /// Indicate that the [`Object`] referenced by this index must be
    ///   narrowed into [`ObjectKind`] `O` when resolved.
    ///
    /// This simply narrows the expected [`ObjectKind`].
    pub fn must_narrow_into<O: ObjectKind>(self) -> ObjectIndex<O> {
        match self {
            Self(index, span, _) => ObjectIndex::new(index, span),
        }
    }
}

tamer: f::Functor: New trait
This commit is purposefully coupled with changes that utilize it to
demonstrate that the need for this abstraction has been _derived_, not
forced; TAMER doesn't aim to be functional for the sake of it, since
idiomatic Rust achieves many of its benefits without the formalisms.
But, the formalisms do occasionally help, and this is one such
example. There is other existing code that can be refactored to take
advantage of this style as well.
I do _not_ wish to pull an existing functional dependency into TAMER; I want
to keep these abstractions light, and eliminate them as necessary, as Rust
continues to integrate new features into its core. I also want to be able
to modify the abstractions to suit our particular needs. (This is _not_ a
general recommendation; it's particular to TAMER and to my experience.)
This implementation of `Functor` is one such example. While it is modeled
after Haskell in that it provides `fmap`, the primitive here is instead
`map`, with `fmap` derived from it, since `map` allows for better use of
Rust idioms. Furthermore, it's polymorphic over _trait_ type parameters,
not method type parameters, allowing for separate trait impls for different
container types, which can in turn be inferred by Rust and allow for some
very concise mapping; this is particularly important for TAMER because of
the disciplined use of newtypes.
For example, `foo.overwrite(span)` and `foo.overwrite(name)` are both
self-documenting, and better alternatives than, say, `foo.map_span(|_|
span)` and `foo.map_symbol(|_| name)`; the latter are perfectly clear in
what they do, but lack a layer of abstraction, and are verbose. But the
clarity of the _new_ form does rely on either good naming conventions of
arguments, or explicit type annotations using turbofish notation if
necessary.
This will be implemented on core Rust types as appropriate and as
possible. At the time of writing, we do not yet have trait specialization,
and there are too many soundness issues for me to be comfortable enabling
it, which limits what we can do with something like, say, a generic
`Result`, while also allowing for specialized implementations based on
newtypes.
DEV-13160
2023-01-04 12:30:18 -05:00
impl<O: ObjectKind> Functor<Span> for ObjectIndex<O> {
    fn map(self, f: impl FnOnce(Span) -> Span) -> Self {
        match self {
            Self(index, span, ph) => Self(index, f(span), ph),
        }
    }
}
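The `map` above is the entirety of the `Functor` mechanics: destructure, transform the one component, rebuild. A minimal sketch of that shape, with a hypothetical two-field `ObjIdx` and a `Span` newtype standing in for the real types:

```rust
// Stand-in `Span` newtype and index pair;
//   `map_span` destructures, applies `f` to the span alone, and rebuilds.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct Span(u32);

#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct ObjIdx(usize, Span);

impl ObjIdx {
    fn map_span(self, f: impl FnOnce(Span) -> Span) -> Self {
        match self {
            Self(index, span) => Self(index, f(span)),
        }
    }
}
```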
impl<O: ObjectKind> PartialEq for ObjectIndex<O> {
    /// Compare two [`ObjectIndex`]s' indices,
    ///   without concern for their associated [`Span`]s.
    ///
    /// See [`ObjectIndex`] for more information on why the span is not
    ///   accounted for in this comparison.
    fn eq(&self, other: &Self) -> bool {
        match (self, other) {
            (Self(index_a, _, _), Self(index_b, _, _)) => index_a == index_b,
        }
    }
}
|
|
|
|
|
|
2022-12-22 14:24:40 -05:00
|
|
|
|
impl<O: ObjectKind> Eq for ObjectIndex<O> {}
impl<O: ObjectKind> From<ObjectIndex<O>> for NodeIndex {
    fn from(objref: ObjectIndex<O>) -> Self {
        match objref {
            ObjectIndex(index, _, _) => index,
        }
    }
}
impl<O: ObjectKind> From<ObjectIndex<O>> for Span {
    fn from(value: ObjectIndex<O>) -> Self {
        match value {
            ObjectIndex(_, span, _) => span,
        }
    }
}
tamer: asg: Add expression edges
This introduces a number of abstractions whose concepts are not fully
documented yet, since I want to see how they evolve in practice first.
This introduces the concept of an edge ontology (similar to a schema) using
the type system.  Even though we are not able to determine what the graph
will look like statically---since that's determined by data fed to us at
runtime---we _can_ ensure that the code _producing_ the graph from those
data will produce a graph that adheres to its ontology.
Because of the typed `ObjectIndex`, we're also able to implement operations
that are specific to the type of object that we're operating on.  Though,
since the type is not (yet?) stored on the edge itself, it is possible to
walk the graph without looking at node weights (the `ObjectContainer`) and
therefore avoid panics for invalid type assumptions, which is bad, but I
don't think that'll happen in practice, since we'll want to be resolving
nodes at some point.  But I'll address that more in the future.
Another thing to note is that walking edges is only done in tests right now,
and so there's no filtering or anything; once there are nodes (if there are
nodes) that allow for different outgoing edge types, we'll almost certainly
want filtering as well, rather than panicking.  We'll also want to be able
to query for any object type, but filter only to what's permitted by the
ontology.
DEV-13160
2023-01-11 15:49:37 -05:00
/// Indicate that an [`ObjectKind`] `Self` can be related to
/// [`ObjectKind`] `OB` by creating an edge from `Self` to `OB`.
///
/// This trait defines a portion of the graph ontology,
/// allowing [`Self`] to be related to `OB` by creating a directed edge
/// from [`Self`] _to_ `OB`, as in:
///
/// ```text
/// (Self) -> (OB)
/// ```
///
/// While the data on the graph itself is dynamic and provided at runtime,
/// the systems that _construct_ the graph using the runtime data can be
/// statically analyzed by the type system to ensure that they only
/// construct graphs that adhere to this schema.
pub trait ObjectRelTo<OB: ObjectKind + ObjectRelatable> =
    ObjectRelatable where <Self as ObjectRelatable>::Rel: From<ObjectIndex<OB>>;

pub(super) trait ObjectRelFrom<OA: ObjectKind + ObjectRelatable> =
    ObjectRelatable where <OA as ObjectRelatable>::Rel: From<ObjectIndex<Self>>;
/// Identify [`Self::Rel`] as a sum type consisting of the subset of
/// [`Object`] variants representing the valid _target_ edges of
/// [`Self`].
///
/// This is used to derive [`ObjectRelTo`],
/// which can be used as a trait bound to assert a valid relationship
/// between two [`Object`]s.
pub trait ObjectRelatable: ObjectKind {
    /// Sum type representing a subset of [`Object`] variants that are valid
    /// targets for edges from [`Self`].
    ///
    /// See [`ObjectRel`] for more information.
    type Rel: ObjectRel;

    /// The [`ObjectRelTy`] tag used to identify this [`ObjectKind`] as a
    /// target of a relation.
    fn rel_ty() -> ObjectRelTy;

    /// Represent a relation to another [`ObjectKind`] that cannot be
    /// statically known and must be handled at runtime.
    ///
    /// A value of [`None`] means that the provided [`ObjectRelTy`] is not
    /// valid for [`Self`].
    /// If the caller is utilizing edge data that is already present on the
    /// graph,
    /// then this means that the system is not properly upholding edge
    /// invariants
    /// (the graph's ontology)
    /// and the system ought to panic;
    /// this is a significant bug representing a problem with the
    /// correctness of the system.
    ///
    /// See [`ObjectRel`] for more information.
    fn new_rel_dyn(
        ty: ObjectRelTy,
        oi: ObjectIndex<Object>,
    ) -> Option<Self::Rel>;
}
/// A relationship to another [`ObjectKind`].
///
/// This trait is intended to be implemented by enums that represent the
/// subset of [`ObjectKind`]s that are able to serve as edge targets for
/// the [`ObjectRelatable`] that utilizes it as its
/// [`ObjectRelatable::Rel`].
///
/// As described in the [module-level documentation](super),
/// the concrete [`ObjectKind`] of an edge is generally not able to be
/// determined statically outside of code paths that created the
/// [`Object`] anew.
/// But we _can_ at least narrow the types of [`ObjectKind`]s to those
/// [`ObjectRelTo`]s that we know are valid,
/// since the system is restricted (statically) to those edges when
/// performing operations on the graph.
///
/// This [`ObjectRel`] represents that subset of [`ObjectKind`]s.
/// A caller may decide to dispatch based on the type of edge it receives,
/// or it may filter edges with [`Self::narrow`] in conjunction with
/// [`Iterator::filter_map`]
/// (for example).
/// Since the wrapped value is an [`ObjectIndex`],
/// the system will eventually panic if it attempts to reference a node
/// that is not of the type expected by the edge,
/// which can only happen if the edge has an incorrect [`ObjectRelTy`],
/// meaning the graph is somehow corrupt
/// (because system invariants were not upheld).
///
/// This affords us both runtime memory safety and static guarantees that
/// the system is not able to generate an invalid graph that does not
/// adhere to the prescribed ontology,
/// provided that invariants are properly upheld by the
/// [`asg`](crate::asg) module.
pub trait ObjectRel {
    /// Attempt to narrow into the [`ObjectKind`] `OB`.
    ///
    /// Unlike [`Object`] nodes,
    /// _this operation does not panic_,
    /// instead returning an [`Option`].
    /// If the relationship is of type `OB`,
    /// then [`Some`] will be returned with an inner
    /// [`ObjectIndex<OB>`](ObjectIndex).
    /// If the narrowing fails,
    /// [`None`] will be returned instead.
    ///
    /// This return value is well-suited for [`Iterator::filter_map`] to
    /// query for edges of particular kinds.
    fn narrow<OB: ObjectKind + ObjectRelatable>(
        self,
    ) -> Option<ObjectIndex<OB>>;
}
/// A container for an [`Object`] allowing for owned borrowing of data.
///
/// The purpose of allowing this owned borrowing is to permit a functional
/// style of object manipulation,
/// like the rest of the TAMER system,
/// despite the mutable underpinnings of the ASG.
/// This is accomplished by wrapping each object in an [`Option`] so that we
/// can [`Option::take`] its inner value temporarily.
///
/// This container has a critical invariant:
/// the inner [`Option`] must _never_ be [`None`] after a method exits,
/// no matter what branches are taken.
/// Methods operating on owned data enforce this invariant by mapping over
/// the data and immediately placing the new value into the container
/// before the method completes.
/// This container will panic if this invariant is not upheld.
///
/// TODO: Make this `pub(super)` when [`Asg`]'s public API is cleaned up.
#[derive(Debug, PartialEq)]
pub struct ObjectContainer(Option<Object>);
impl ObjectContainer {
    /// Retrieve an immutable reference to the inner [`Object`],
    /// narrowed to expected type `O`.
    ///
    /// Panics
    /// ======
    /// This will panic if the object on the graph is not the expected
    /// [`ObjectKind`] `O`.
    pub fn get<O: ObjectKind>(&self) -> &O {
        let Self(container) = self;

        container
            .as_ref()
            .diagnostic_unwrap(container_oops)
            .as_ref()
    }
    /// Attempt to modify the inner [`Object`],
    /// narrowed to expected type `O`,
    /// returning any error.
    ///
    /// See also [`Self::replace_with`] if the operation is [`Infallible`].
    ///
    /// Panics
    /// ======
    /// This will panic if the object on the graph is not the expected
    /// [`ObjectKind`] `O`.
    pub fn try_replace_with<O: ObjectKind, E>(
        &mut self,
        f: impl FnOnce(O) -> Result<O, (O, E)>,
    ) -> Result<(), E> {
        let ObjectContainer(container) = self;

        let obj = container.take().diagnostic_unwrap(container_oops).into();

        // NB: We must return the object to the container in all code paths!
        let result = f(obj)
            .map(|obj| {
                container.replace(obj.into());
            })
            .map_err(|(orig, err)| {
                container.replace(orig.into());
                err
            });

        debug_assert!(container.is_some());
        result
    }
    /// Modify the inner [`Object`],
    /// narrowed to expected type `O`.
    ///
    /// See also [`Self::try_replace_with`] if the operation can fail.
    ///
    /// Panics
    /// ======
    /// This will panic if the object on the graph is not the expected
    /// [`ObjectKind`] `O`.
    pub fn replace_with<O: ObjectKind>(&mut self, f: impl FnOnce(O) -> O) {
        let _ = self.try_replace_with::<O, Infallible>(|obj| Ok(f(obj)));
    }
}
impl<I: Into<Object>> From<I> for ObjectContainer {
    fn from(obj: I) -> Self {
        Self(Some(obj.into()))
    }
}

fn container_oops() -> Vec<AnnotatedSpan<'static>> {
    // This used to be a real span,
    // but since this invariant is easily verified and should absolutely
    // never occur,
    // there's no point in complicating the API.
    let span = UNKNOWN_SPAN;

    vec![
        span.help("this means that some operation used take() on the object"),
        span.help("  container but never replaced it with an updated object"),
        span.help(
            "  after the operation completed, which should not \
                be possible.",
        ),
    ]
}
|