tame/tamer/src/xir/attr.rs

// XIRT attributes
//
// Copyright (C) 2014-2023 Ryan Specialty, LLC.
//
// This file is part of TAME.
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
//! XIRT attributes.
//!
//! Attributes are represented by [`Attr`].
//!
//! See [parent module](super) for additional documentation.
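//!
//! The sketch below illustrates the basic API of [`Attr`].
//! It is marked `ignore` and is not compiled as a doctest;
//! the bindings `name`, `value`, `key_span`, and `value_span` are
//! placeholders for values produced elsewhere
//! (e.g. by the XIR reader).
//!
//! ```ignore
//! let attr = Attr::new(name, value, (key_span, value_span));
//!
//! assert_eq!(attr.name(), name);
//! assert_eq!(attr.value(), value);
//! assert_eq!(attr.attr_span().key_span(), key_span);
//! assert_eq!(attr.attr_span().value_span(), value_span);
//! ```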
mod parse;
use super::QName;
use crate::{
parse::Token,
span::{Span, SpanLenSize},
sym::SymbolId,
};
use std::fmt::Display;
pub use parse::{AttrParseError, AttrParseState};
/// Element attribute.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Attr(pub QName, pub SymbolId, pub AttrSpan);
/// Spans associated with attribute key and value.
///
/// The diagram below illustrates the behavior of `AttrSpan`.
/// Note that the extra spaces surrounding the `=` are intentional to
/// illustrate what the behavior ought to be.
/// Spans are represented by `[---]` intervals,
/// with the byte offset at each end,
/// and the single-letter span name centered below the interval.
/// `+` represents intersecting `-` and `|` lines.
///
/// ```text
///   <foo bar  =  "baz" />
///        [-]     [+-+]
///        5 7    13| |17
///        |K       |Q||
///        |        | ||
///        |        [-]|
///        |       14 16
///        |         V |
///        [-----------]
///              A
/// ```
///
/// Above we have
///
/// - `A` = [`AttrSpan::span`];
/// - `K` = [`AttrSpan::key_span`];
/// - `V` = [`AttrSpan::value_span`]; and
/// - `Q` = [`AttrSpan::value_span_with_quotes`].
///
/// Note that this object assumes that the key and value spans are adjacent
/// to one another in the same [`span::Context`](crate::span::Context).
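///
/// As a concrete sketch of the above
/// (marked `ignore` and not compiled as a doctest),
/// using the same offsets as the diagram,
/// where `DC` is the dummy span context used by the unit tests at the
/// bottom of this module:
///
/// ```ignore
/// let attr_span = AttrSpan(DC.span(5, 3), DC.span(14, 3));
///
/// assert_eq!(attr_span.key_span(), DC.span(5, 3)); // K
/// assert_eq!(attr_span.value_span(), DC.span(14, 3)); // V
/// assert_eq!(attr_span.value_span_with_quotes(), DC.span(13, 5)); // Q
/// assert_eq!(attr_span.span(), DC.span(5, 13)); // A
/// ```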
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct AttrSpan(pub Span, pub Span);
impl AttrSpan {
/// A [`Span`] covering the entire attribute token,
/// including the key,
/// _quoted_ value,
/// and everything in-between.
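///
/// For the diagram in [`AttrSpan`]'s documentation,
/// the quoted value span `Q = (13, 5)` ends at byte offset
/// `13 + 5 = 18`,
/// and `18 - 5 = 13`,
/// yielding the attribute span `A = (5, 13)` anchored at the key
/// offset `5`.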
pub fn span(&self) -> Span {
let AttrSpan(k, _) = self;
// TODO: Move much of this into `Span`.
k.context().span(
k.offset(),
self.value_span_with_quotes()
.endpoints_saturated()
.1
.offset()
.saturating_sub(k.offset())
.try_into()
.unwrap_or(SpanLenSize::MAX),
)
}
/// The span associated with the name of the key.
///
/// This does _not_ include the following `=` or any surrounding
/// whitespace.
pub fn key_span(&self) -> Span {
let AttrSpan(k, _) = self;
*k
}
/// The span associated with the string value _inside_ the quotes,
/// not including the quotes themselves.
///
/// See [`AttrSpan`]'s documentation for an example.
pub fn value_span(&self) -> Span {
let AttrSpan(_, v) = self;
*v
}
/// The span associated with the string value _including_ the
/// surrounding quotes.
///
/// See [`AttrSpan`]'s documentation for an example.
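///
/// This is derived by widening the value span by one byte on each
/// side:
/// for `V = (14, 3)` in the diagram,
/// this yields `Q = (13, 5)`.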
pub fn value_span_with_quotes(&self) -> Span {
let AttrSpan(_, v) = self;
v.context()
.span(v.offset().saturating_sub(1), v.len().saturating_add(2))
}
}
impl Attr {
/// Construct a new simple attribute with a name, value, and respective
/// [`Span`]s.
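///
/// The span pair is `(key span, value span)`,
/// mirroring the fields of [`AttrSpan`].
/// As a brief sketch
/// (marked `ignore` and not compiled as a doctest;
/// the bindings are placeholders):
///
/// ```ignore
/// let attr = Attr::new(name, value, (key_span, value_span));
///
/// assert_eq!(attr.attr_span().key_span(), key_span);
/// assert_eq!(attr.attr_span().value_span(), value_span);
/// ```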
#[inline]
pub fn new(name: QName, value: SymbolId, span: (Span, Span)) -> Self {
Self(name, value, AttrSpan(span.0, span.1))
}
/// Attribute name.
#[inline]
pub fn name(&self) -> QName {
self.0
}
/// Retrieve the value from the attribute.
///
/// Since [`SymbolId`] implements [`Copy`],
/// this returns an owned value.
#[inline]
pub fn value(&self) -> SymbolId {
self.1
}
/// [`AttrSpan`] for this attribute.
///
/// The attribute span allows deriving a number of different spans;
/// see [`AttrSpan`] for more information.
pub fn attr_span(&self) -> &AttrSpan {
match self {
Attr(.., span) => span,
}
}
}
impl Token for Attr {
fn ir_name() -> &'static str {
// This may be used by multiple things,
// but it's primarily used by XIRF.
"XIRF"
}
fn span(&self) -> Span {
match self {
Attr(.., attr_span) => attr_span.span(),
}
}
}
impl crate::parse::Object for Attr {}
impl Display for Attr {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
// Do not display value since it can contain any information and
// mess up formatted output.
// If we wish to display that information in the future,
// then we ought to escape and elide it,
// but we must furthermore make sure that it makes sense in all
// contexts;
// many diagnostic messages today expect that outputting an
// attribute will output the name of that attribute and
// nothing more.
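// For example,
// an attribute `bar="baz"` is displayed simply as `@bar`.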
match self {
Self(key, _value, _) => write!(f, "@{key}"),
}
}
}
/// List of attributes.
///
/// Attributes are ordered in XIR so that this IR will be suitable for code
/// formatters and linters.
///
/// This abstraction will allow us to manipulate the internal data so that
/// it is suitable for a particular task in the future
/// (e.g. O(1) lookups by attribute name).
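///
/// The sketch below
/// (marked `ignore` and not compiled as a doctest)
/// shows the builder-style API;
/// `attr_a` and `attr_b` are placeholder [`Attr`]s constructed
/// elsewhere.
///
/// ```ignore
/// let attrs = AttrList::new()
///     .push(attr_a.clone())
///     .push(attr_b.clone());
///
/// assert!(!attrs.is_empty());
/// assert_eq!(attrs.find(attr_a.name()), Some(&attr_a));
///
/// // Equivalently, collect from an array or iterator:
/// let same: AttrList = [attr_a, attr_b].into();
/// assert_eq!(same, attrs);
/// ```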
#[derive(Debug, Clone, Eq, PartialEq, Default)]
pub struct AttrList {
attrs: Vec<Attr>,
}
impl AttrList {
/// Construct a new, empty attribute list.
pub fn new() -> Self {
Self { attrs: vec![] }
}
/// Add an attribute to the end of the attribute list.
pub fn push(mut self, attr: Attr) -> Self {
self.attrs.push(attr);
self
}
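/// Append each [`Attr`] yielded by `iter` to the end of the
/// attribute list,
/// preserving iteration order.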
pub fn extend<T: IntoIterator<Item = Attr>>(mut self, iter: T) -> Self {
self.attrs.extend(iter);
self
}
/// Search for an attribute of the given `name`.
///
/// _You should use this method only when a linear search makes sense._
///
/// This performs an `O(n)` linear search in the worst case.
/// Future implementations may perform an `O(1)` lookup under certain
/// circumstances,
/// but this should not be expected.
pub fn find(&self, name: QName) -> Option<&Attr> {
self.attrs.iter().find(|attr| attr.name() == name)
}
/// Returns [`true`] if the list contains no attributes.
pub fn is_empty(&self) -> bool {
self.attrs.is_empty()
}
}
impl From<Vec<Attr>> for AttrList {
fn from(attrs: Vec<Attr>) -> Self {
AttrList { attrs }
}
}
impl FromIterator<Attr> for AttrList {
fn from_iter<T: IntoIterator<Item = Attr>>(iter: T) -> Self {
iter.into_iter().collect::<Vec<Attr>>().into()
}
}
impl<const N: usize> From<[Attr; N]> for AttrList {
fn from(attrs: [Attr; N]) -> Self {
AttrList {
attrs: attrs.into(),
}
}
}
#[cfg(test)]
mod test {
use crate::span::dummy::DUMMY_CONTEXT as DC;
use super::*;
// See docblock for [`AttrSpan`].
const A: Span = DC.span(5, 13); // Entire attribute token
const K: Span = DC.span(5, 3); // Key
const V: Span = DC.span(14, 3); // Value without quotes
const Q: Span = DC.span(13, 5); // Value with quotes
#[test]
fn attr_span_token() {
assert_eq!(AttrSpan(K, V).span(), A);
}
#[test]
fn attr_span_value_with_quotes() {
assert_eq!(AttrSpan(K, V).value_span_with_quotes(), Q);
}
#[test]
fn attr_span_key() {
assert_eq!(AttrSpan(K, V).key_span(), K);
}
#[test]
fn attr_span_value() {
assert_eq!(AttrSpan(K, V).value_span(), V);
}
}