mirror of
https://github.com/rust-lang/rust-analyzer.git
synced 2026-03-30 20:49:32 +00:00
rust-analyzer allows AI usage (see #21314), but requires contributors to declare usage. This adds a rules file that improves LLM output quality and instructs the LLM to declare usage in commit messages. I've written the rules in CLAUDE.md, but also symlinked it to AGENTS.md so other LLM tools pick it up.

## Rules file contents

1. Instructions for both humans and AIs to declare AI usage.
2. Relevant commands for testing, linting, and codegen.

Note that I deliberately didn't include a folder-by-folder overview of the project structure. This can go stale, and there's some evidence that project-structure overviews can hurt LLM output quality overall. See the following paper:

> Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?
> https://arxiv.org/pdf/2602.11988

## Testing

I exercised this change with the following contrived prompt. Note that in practice rust-analyzer is hitting review scaling limits for new code actions, but this was easy to test end-to-end.

> Add a new code action that replaces the content of a string literal with the text "banana".

...

> commit it

This produced a functional code action with both Codex and Claude, and in both cases the commit message mentioned that it was AI generated. Example commit message:

> Add "Replace string with banana" code action
>
> Add a new assist that replaces a string literal's content with "banana" when the cursor is on a STRING token.
>
> AI: Generated with Claude Code (claude-opus-4-6).

I confirmed that the code action worked by testing a rust-analyzer build in Emacs, and also confirmed that the generated tests looked sensible.

## AI Usage Disclosures

I wrote the first draft of the rules file with Opus 4.6, then manually reviewed everything.
1.8 KiB
Reminder: All AI usage must be disclosed in commit messages, see CONTRIBUTING.md for more details.
## Build Commands

```shell
cargo build                  # Build all crates
cargo test                   # Run all tests
cargo test -p <crate>        # Run tests for a specific crate (e.g., cargo test -p hir-ty)
cargo lint                   # Run clippy on all targets
cargo xtask codegen          # Run code generation
cargo xtask tidy             # Run tidy checks
UPDATE_EXPECT=1 cargo test   # Update test expectations (snapshot tests)
RUN_SLOW_TESTS=1 cargo test  # Run heavy/slow tests
```
## Key Architectural Invariants

- Typing in a function body never invalidates global derived data
- Parser/syntax tree is built per-file to enable parallel parsing
- The server is stateless (HTTP-like); context must be re-created from request parameters
- Cancellation uses salsa's cancellation mechanism; computations panic with a `Cancelled` payload
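The cancellation invariant above can be sketched in plain std Rust. This is a minimal illustration under assumed simplifications, not salsa's actual API: the point is that a long-running computation unwinds with a dedicated payload type, and the caller catches the unwind and checks for that type rather than treating it as a crash.

```rust
use std::panic;

// Stand-in for salsa's cancellation payload (hypothetical type, for illustration).
struct Cancelled;

// A computation that bails out by unwinding with the `Cancelled` payload.
fn long_computation(cancel_requested: bool) -> u32 {
    if cancel_requested {
        panic::panic_any(Cancelled);
    }
    42
}

fn main() {
    // The caller catches the unwind and distinguishes cancellation from a real bug
    // by downcasting the panic payload.
    let result = panic::catch_unwind(|| long_computation(true));
    let was_cancelled = matches!(&result, Err(payload) if payload.is::<Cancelled>());
    assert!(was_cancelled);
    println!("cancelled: {}", was_cancelled);
}
```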
## Code Generation

Generated code is committed to the repo. Grammar and AST are generated from ungrammar. Run `cargo test -p xtask` after adding inline parser tests (`// test test_name` comments).
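To illustrate the inline-test convention, here is a small self-contained sketch of the kind of extraction the codegen step performs — this is not the actual xtask implementation, just an assumed simplification: scan parser source for `// test <name>` comments and collect the commented-out code beneath each one into a named test.

```rust
// Sketch of inline-test extraction: a `// test <name>` comment followed by
// `// ...` lines of commented-out code becomes a (name, body) pair.
fn extract_inline_tests(src: &str) -> Vec<(String, String)> {
    let mut tests = Vec::new();
    let mut lines = src.lines().peekable();
    while let Some(line) = lines.next() {
        if let Some(name) = line.trim_start().strip_prefix("// test ") {
            let mut body = String::new();
            // Consume the commented-out code that forms the test body.
            while let Some(code) = lines.peek().and_then(|l| l.trim_start().strip_prefix("// ")) {
                body.push_str(code);
                body.push('\n');
                lines.next();
            }
            tests.push((name.to_string(), body));
        }
    }
    tests
}

fn main() {
    let src = "// test fn_item\n// fn foo() {}\nfn item() {}\n";
    let tests = extract_inline_tests(src);
    assert_eq!(tests, vec![("fn_item".to_string(), "fn foo() {}\n".to_string())]);
    println!("extracted {} test(s)", tests.len());
}
```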
## Testing

Tests are snapshot-based using expect-test. Test fixtures use a mini-language:

- `$0` marks the cursor position
- `// ^^^^` labels attach to the line above
- `//- minicore: sized, fn` includes parts of minicore (minimal core library)
- `//- /path/to/file.rs crate:name deps:dep1,dep2` declares files/crates
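As a concrete illustration of the fixture mini-language, here is a hypothetical multi-file fixture (file names and crate names are made up): each `//- ` header starts a new virtual file, and `$0` marks the cursor.

```rust
// Counts the virtual files declared in a fixture string; each `//- ` header
// introduces one file (a simplified check, for illustration only).
fn count_files(fixture: &str) -> usize {
    fixture.lines().filter(|l| l.starts_with("//- ")).count()
}

fn main() {
    // Hypothetical fixture: two crates, with the cursor inside the call to `bar::g`.
    let fixture = "\
//- /main.rs crate:main deps:bar
fn f() { bar::g($0) }
//- /lib.rs crate:bar
pub fn g() {}
";
    assert_eq!(count_files(fixture), 2);
    assert!(fixture.contains("$0"));
    println!("files: {}", count_files(fixture));
}
```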
## Style Notes

- Use `stdx::never!` and `stdx::always!` instead of `assert!` for recoverable invariants
- Use the `T![fn]` macro instead of `SyntaxKind::FN_KW`
- Use keyword name mangling over underscore prefixing for identifiers: `crate` → `krate`, `fn` → `func`, `struct` → `strukt`, `type` → `ty`
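A short self-contained sketch of the naming convention in the last bullet (the types and functions here are invented for illustration): reserved words are mangled rather than prefixed with an underscore.

```rust
// Hypothetical types following the convention: `krate` instead of `crate`,
// `func` instead of `fn` — mangled names rather than `_crate` / `_fn`.
struct Krate {
    name: String,
}

fn path_of(krate: &Krate, func: &str) -> String {
    format!("{}::{}", krate.name, func)
}

fn main() {
    let krate = Krate { name: "core".to_string() };
    let path = path_of(&krate, "mem::swap");
    assert_eq!(path, "core::mem::swap");
    println!("{}", path);
}
```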