Perhaps you can clarify - if you're saying unsafe Rust that performs undefined behaviour is unsafe regardless of the safe bits, then you'll get no disagreement from me. If on the other hand you're suggesting that it's possible to have well-defined unsafe Rust that exposes a lack of safety across the unsafe boundary, then you're going to have to explain a bit more...
In Rust, casting a *mut to a &mut (the standard way of letting safe code mutate an object) can be undefined behavior if another &, &mut, or Pin<&mut> to the same data is live, regardless of whether you ever commit a use-after-free. In C++, the equivalent code mixing * and & interchangeably is well-defined until you actually commit a use-after-free. This makes it harder to write correct Rust code mixing safe and unsafe code.
It's not just dereferencing invalid */& that's illegal in Rust; constructing and dereferencing valid &/&mut in ways that don't respect tree-shaped mutability is illegal too.
I've put together a playground at https://play.rust-lang.org/?version=stable&mode=debug&editio.... "Object graph" architectures are common in C++ and sometimes necessary in Rust when building GUI applications or emulators, and Rust's pointer aliasing rules invalidate otherwise-correct code of this kind, placing roadblocks in the way of writing correct code. Creation of &mut (which invalidates aliasing pointers for the duration of the &mut, along with sibling &mut and all pointers derived from them) is so pervasive and so implicit that I can't tell what's legal and what's not by auditing code. (Box<T> used to invalidate aliasing pointers as well, though this may be changed. The current plan for enabling self-reference is Pin<&mut T>, but the exact semantics of how and when wrapping a &mut T in a struct keeps it from invalidating self-references and incoming pointers are still not specified.)
I've heard statements that addr_of_mut! is an interim API, and that the situation may be improved with &raw and unsafe-deref syntax (https://faultlore.com/blah/fix-rust-pointers/). But I expect a production systems language to improve on C/C++'s usability in their strongest domains out of the box (much as Send/Sync being marker types that don't create UB beyond C++ makes threading more tractable, and enums are superior to std::variant). Instead, today's Rust pointer rules redefine swathes of C and C++ use cases and design patterns as undefined behavior, and the alternative approaches are ugly to express in safe code (Cell/RefCell), and easy to get wrong and tricky and ugly to get right in unsafe code (see my playground), with the promise that they were trying to make programming easier and will create a suitable replacement someday down the line (7 years and counting after Rust 1.0).
Thanks for the explanation. Self referential structures are indeed problematic.
I'm actually pretty sanguine about the general case of invalidating C/C++ patterns. It doesn't surprise me that some new computer science is going to be necessary to help deal with the edge cases exposed by mainstreaming a new paradigm.
My goal is not to invent new computer science, but to have a language where programming is not "puzzle solving" (https://news.ycombinator.com/item?id=33694124): needing to work with ghost cells (which I haven't tried firsthand yet) because pointer algorithms are laden with syntactic salt and pitfalls (note that pointers aren't a "C/C++ pattern" but fundamental to systems programming); working with Rust's num traits that describe the properties of numbers rather than directly saying your FFT code is generic over f32 and f64; hacking in sealed traits using underspecified and disputed visibility rules (to approximate matching on a sum type at compile time); chaining iterator combinators rather than writing code where program order matches execution order; and not being able to index into strings when parsing unless I convert to &[u8] and then spend O(n) extra runtime validating that the remaining text is valid UTF-8.
From experience, I find that "puzzle solving" tends to result in very good code. I recently had the experience of fundamentally changing the design of a decent chunk of code (think maybe 3 weeks of work) for which I had comprehensive integration tests. Once I had the Rust compiling, the tests passed first time. I've had this sort of experience in Rust several times, and never in other languages (primarily C and Python). It wasn't a trivial implementation in Rust - I had to think carefully about how to craft it to make effective use of Rust idioms, but the result is great.
It feels to me that the question comes down to whether changing paradigms is worth it. I have found overwhelmingly that it has been and with my hard won battle scars, I'll stay here.
What I see as Rust's advances in programming experience (I assume you also like enums with payloads, bounds checking by default, structured threading to avoid C++'s "data races by default", fat pointers over storing vtables in objects, the hashmap Entry API) should be able to stand on their own. I want to be able to use them in a language that doesn't obstruct me from applying working patterns in areas (general single-threaded memory lifetime management) where I rarely make mistakes (unlike with threading), and I find Rust's current options to be regressions: it only offers easy access to special cases like Box/Rc/Arc/RefCell, and while I can write imperative code with loops and matches, the community largely juggles higher-order functions into Option/Result/Iterator methods, which leak upon contact with side effects. Though I'm unsure whether you find the aspects I criticize to be positives, or merely less of an issue than I do.
Separately, I've been loosely following gcc-rs's development, which has uncovered some surprising hidden complexity in rustc's operation leaking into the Rust language's behavior. For example, for loops "requires Iterators that will need generics and traits to be implemented first" (https://thephilbert.io/2021/02/15/gcc-rust-weekly-status-rep...) and these abstractions likely don't get optimized away in debug builds, and resolving method calls can take dozens of steps evaluating Deref and adding and removing & and &mut (https://thephilbert.io/2022/01/31/gcc-rust-weekly-status-rep...). I think that bidirectional type inference (one time extracting a lambda from a method argument to a variable broke argument inference), complex method call resolution algorithms, and defining the semantics of code (not just validity as with borrow checking) through complex trait resolver algorithms now being reimplemented in Prolog (Chalk and possibly datafrog), collectively do Rust a disservice in fulfilling the role of a transparent, explicit, well-specified language.
For now I'm just waiting for the language I'm hoping for (whether or not it's a language you'd want to use). My fear is that the inertia and resources of C and C++ on one end, and of Rust on the other (with pcwalton calling Zig's general-case memory management a "massive step backwards for the industry"), have drained away available resources from a language (Zig, Nim?, Hare?, etc.) that uses abstraction and higher-order functions sparingly, in places where they resolve problems without causing harm, rather than making them the most viable option in the language by crippling the alternatives.
Interesting. I think my perspective is that the handrails are so useful that I can live with the few cases where they get in the way, because it's rare that they're actually a performance sticking point. Indeed, the handrails (at least in principle) can add much in terms of performance, e.g. via the strong no-alias guarantees and iterator semantics. I guess if you spend all your time writing complicated graph-based data structures over pointers you might have a different opinion!