# CodeReviewr — Full Content

> AI-powered code reviews with active-developer pricing. Only pay for developers who actually ship code.

Source: https://codereviewr.app/llms.txt

---

# Documentation

## Getting started with CodeReviewr

URL: https://codereviewr.app/docs/getting-started
Description: Set up CodeReviewr in under 60 seconds and start getting AI-powered code reviews on your pull requests

Get AI code reviews on every pull request. No waiting for reviewers, no context switching, just fast feedback when you need it.

## Sign up with GitHub

Visit [codereviewr.app](https://codereviewr.app) and sign in with GitHub. We use GitHub OAuth—no separate account needed.

## Install the GitHub App

After signing in, go to Settings and click "Install GitHub App." Choose your personal account or organization, then select which repositories to review. Installation takes about 30 seconds.

## Automatic reviews

Create a pull request in any connected repository. CodeReviewr automatically:

1. Detects the PR via GitHub webhooks
2. Analyzes the diff using AI
3. Posts review comments on the PR
4. Reviews only new changes on subsequent pushes, keeping incremental reviews focused

Reviews happen automatically—no setup required beyond installation. You can also trigger reviews manually or chat with the bot as described below.

## Chat and commands

After a review, you can interact with CodeReviewr directly in PR comments by mentioning it:

```
@codereviewr Why did you flag this as a security issue?
```

You can also trigger actions with slash commands:

- `/review` — manually trigger a full code review
- `/help` — see all available commands

## Review categories

Issues are categorized by severity:

- **Critical**: Security vulnerabilities, production-breaking bugs
- **High**: Major functionality issues, performance problems
- **Medium**: Bugs that impact UX, technical debt
- **Low**: Code style, minor improvements
- **Info**: Suggestions and best practices

Each issue includes the file path, line number, and explanation. CodeReviewr focuses on meaningful problems—security, bugs, performance—not style nitpicks.

## Limitations

CodeReviewr works best for PRs under 300KB of changes. Very large PRs (generated code, massive refactors) may be skipped. Split large changes into smaller PRs anyway—it's better practice.

We currently support GitHub only. Reviews happen automatically and usually complete within seconds for small PRs.

## Managing your plan

Reviews are included in your plan. You can monitor activity per PR in your workspace dashboard, and uninstall CodeReviewr from repos you don't want reviewed.

## Next steps

Create a test PR and watch CodeReviewr work. Review a few PRs, check your usage dashboard, then adjust your workflow as needed.

If something doesn't work, check our troubleshooting guide or contact us through the website.

## Alternative: CLI reviews

Prefer reviewing code locally before pushing? Use the CodeReviewr CLI to get the same AI-powered reviews in your terminal.

```bash
npm install -g @codereviewr/cli
codereviewr auth login cr_your_token
codereviewr review
```

The CLI is useful for:

- Pre-commit or pre-push reviews
- CI/CD pipeline integration
- Reviewing changes before creating a PR

See the [CLI documentation](/docs/cli) for full setup and usage instructions.

## Command Line Interface (CLI)

URL: https://codereviewr.app/docs/cli
Description: Review code locally from your terminal before pushing

Review your code changes locally before pushing.
The CLI gives you the same AI-powered reviews as the GitHub App, but in your terminal.

## Installation

```bash
npm install -g @codereviewr/cli
```

## Authentication

Get your API token from the [CodeReviewr dashboard](https://codereviewr.app) under Settings → API Tokens.

```bash
codereviewr auth login cr_your_token_here
```

Tokens start with `cr_`. The token is stored in `~/.codereviewr/config.json`.

Check your authentication status:

```bash
codereviewr auth status
```

## Reviewing code

Review all uncommitted changes in your current repository:

```bash
codereviewr review
```

The CLI analyzes your changes and prints issues directly to the terminal, categorized by severity (Critical, High, Medium, Low, Info).

### Compare against a branch

Review changes compared to a specific branch, commit, or tag:

```bash
# Compare against main branch
codereviewr review --ref main

# Compare against a specific commit
codereviewr review --ref abc123

# Compare against a tag
codereviewr review --ref v1.0.0
```

### Review only staged changes

```bash
codereviewr review --staged
```

### Verbose output

See detailed debug information, including which files are being read for context:

```bash
codereviewr review --verbose
```

## Exit codes

The CLI returns exit codes based on review results, making it useful for CI/CD pipelines and pre-commit hooks:

- `0` - No critical or high severity issues found
- `1` - High severity issues found
- `2` - Critical severity issues found

Example in a CI pipeline:

```bash
codereviewr review --ref main || exit 1
```

## Configuration

### Max iterations

The review process may need multiple iterations to gather context. The default is 25 iterations.
Increase for complex codebases:

```bash
# Set globally
codereviewr auth set-max-iterations 50

# Or per-review
codereviewr review --max-iterations 50
```

### Custom API URL

For development or self-hosted instances:

```bash
codereviewr auth set-url http://localhost:5174
```

### Remove authentication

```bash
codereviewr auth logout
```

## Use cases

### Pre-commit reviews

Run a quick review before committing:

```bash
codereviewr review --staged
```

### Pre-push reviews

Review all changes on your branch before pushing:

```bash
codereviewr review --ref main
```

### CI/CD integration

Add to your CI pipeline to catch issues before merge:

```yaml
# GitHub Actions example
- name: Code Review
  run: |
    npm install -g @codereviewr/cli
    codereviewr auth login ${{ secrets.CODEREVIEWR_TOKEN }}
    codereviewr review --ref ${{ github.base_ref }}
```

## Command reference

### `codereviewr review`

| Option | Description |
|--------|-------------|
| `-r, --ref <ref>` | Compare against a specific git ref |
| `--staged` | Only review staged changes |
| `-v, --verbose` | Show verbose debug output |
| `--max-iterations <n>` | Maximum review iterations (default: 25) |

### `codereviewr auth`

| Command | Description |
|---------|-------------|
| `login <token>` | Set your API token |
| `logout` | Remove your API token |
| `status` | Show authentication status |
| `set-url <url>` | Set custom API URL |
| `set-max-iterations <n>` | Set default max iterations |

### Global options

| Option | Description |
|--------|-------------|
| `-v, --verbose` | Enable verbose logging for any command |

## Understanding the review flow

URL: https://codereviewr.app/docs/review-flow
Description: Learn how CodeReviewr processes pull requests, from webhook to GitHub comments

You push code, create a PR, and CodeReviewr starts reviewing it automatically. Here's what happens behind the scenes.

## Webhook trigger

When you create a pull request or push commits, GitHub sends a webhook to CodeReviewr.
We listen for:

- **PR opened**: Full review of all changes
- **PR synchronized**: Incremental review of new changes only
- **PR reopened**: Full review again

Webhooks fire within seconds. No polling, no delays.

## Job queuing

Webhooks enqueue a review job in our queue system. This enables reliability, rate limiting, and fair processing of multiple PRs.

## Fetching PR data

The review worker fetches PR details from GitHub: title, description, diff, and commit history.

For incremental reviews (new commits pushed), we only fetch the diff since the last reviewed commit. This is the key to cost efficiency—no re-analyzing unchanged code.

## Size limits

Pull requests over 300KB of changes get special handling. Very large PRs often include generated code, dependency updates, or massive refactors. CodeReviewr posts a comment explaining the PR is too large for automated review. Split it into smaller PRs for better feedback anyway.

The limit balances AI context windows with practical review sizes. Most productive PRs are much smaller.

## AI analysis

The diff gets sent to our AI review system. We use sophisticated model selection and optimization techniques to provide the best reviews at the lowest cost.

Our system:

1. Analyzes the diff line by line
2. Gathers context when needed (reads files, explores directories)
3. Identifies issues (security, bugs, performance, quality)
4. Categorizes severity (Critical, High, Medium, Low, Info)
5. Marks solved issues from previous reviews

We've tuned our prompts and workflows to emphasize meaningful problems over style nitpicks. Security issues, bugs, and performance problems get priority.

## Issue creation and posting

For each issue found, the system creates a review issue with file path, line number, severity, category, and description. Issues are stored and automatically posted as PR review comments on GitHub.

If no issues are found, CodeReviewr posts a summary comment saying the PR looks good.
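As a rough sketch of the trigger and size-limit rules above (illustrative only, not CodeReviewr's actual code; `planReview`, `ReviewType`, and `MAX_DIFF_BYTES` are hypothetical names invented here):

```typescript
// Hypothetical sketch of the webhook → review decision described above.
type ReviewType = "full" | "incremental" | "skipped";

const MAX_DIFF_BYTES = 300 * 1024; // the documented 300KB size limit

function planReview(action: string, diffBytes: number): ReviewType {
  // Very large PRs get a "too large" comment instead of a review
  if (diffBytes > MAX_DIFF_BYTES) return "skipped";

  switch (action) {
    case "opened":
    case "reopened":
      return "full"; // review all changes
    case "synchronize":
      return "incremental"; // only the diff since the last reviewed commit
    default:
      return "skipped"; // unrelated webhook actions are ignored
  }
}

console.log(planReview("synchronize", 40 * 1024)); // incremental
```

The real pipeline enqueues a job rather than deciding inline, but the mapping from webhook action to review type is the part worth internalizing.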
## Incremental reviews

When you push new commits to an existing PR:

1. We fetch only the diff since the last reviewed commit
2. The system analyzes just the new changes
3. New issues are posted
4. Fixed issues are marked solved and their GitHub comments are resolved

This keeps reviews focused on what actually changed.

## Draft PR summarization

When a pull request is opened or updated in draft state, CodeReviewr generates an AI-powered title and description for it and writes them back to the PR on GitHub. This helps you frame the work-in-progress before sharing it for review.

A full code review still runs alongside the summarization—draft status does not skip the review.

## PR chat and commands

You can interact with CodeReviewr directly in PR comments by mentioning the app (e.g. `@codereviewr`). Two modes are supported:

**Chat**: Ask any question about the PR or the codebase. The AI reads the relevant files and replies as a comment.

```
@codereviewr What are the security implications of this change?
```

**Slash commands**: Trigger specific actions.

| Command | Description |
|---------|-------------|
| `/review` | Trigger a full code review immediately |
| `/help` | List all available commands |

Chat messages and commands are tracked in your workspace dashboard.

## Error handling

If something goes wrong (API timeout, network issue), the job retries automatically. After multiple failed attempts, the job is marked as failed and visible in your dashboard.

## The bottom line

The flow is straightforward: webhook → queue → analyze → post. The complexity is in incremental reviews, issue tracking, and our optimization techniques. But from your perspective, it's just push code and get feedback.

## Supported tools and capabilities

URL: https://codereviewr.app/docs/supported-tools
Description: Learn what CodeReviewr can analyze and what AI tools it uses during code reviews

CodeReviewr uses advanced AI to analyze your code.
The AI doesn't have direct repository access—instead, it intelligently gathers context from your codebase when needed. This approach helps the AI understand code patterns, check for related issues, and provide context-aware feedback that actually makes sense for your codebase.

## What it can do

**File reading**: The AI reads repository files when it needs context—to understand imported functions, check type definitions, or review related code patterns. This applies to both automated PR reviews and PR chat responses.

**Directory exploration**: It explores repository structure to find related files, check if tests exist, and understand project organization.

**Issue tracking**: It tracks issues across review cycles. When you fix an issue from a previous review, it's marked as resolved. This enables incremental reviews—only new changes are analyzed, keeping reviews efficient.

**PR chat**: When you mention CodeReviewr in a PR comment, it reads relevant files and answers your question about the PR or codebase. File reads during chat are visible in your dashboard.

## How it works

During a review, the AI analyzes the PR diff first. If it needs context, it gathers relevant information from your codebase. All access happens through GitHub's API—read-only, no filesystem changes.

## Current limitations

CodeReviewr can't yet search for patterns across the entire codebase, follow symbol references, or check git history. These features are planned based on developer needs.

If you need specific capabilities, reach out. Developer feedback drives our roadmap.

## Pricing and plans

URL: https://codereviewr.app/docs/pricing
Description: Understanding CodeReviewr's plans — Free and Paid

CodeReviewr charges based on active developers, not seats. You only pay for team members who actually open pull requests in a given month.

## Plans

### Free — $0/month

No credit card required.
- **5 PRs/month** on private repos
- **25 PRs/month** on public repos
- All features included (reviews, chat, security scanning)
- Counters are independent — private and public PRs don't share limits

### Paid — $8/month

Remove the limits and scale with your team.

- Unlimited PRs
- 1 active developer included
- All features included
- Cancel anytime

**Team add-on:** Each additional active developer beyond the first adds $12 that month, charged automatically via a one-off invoice — no plan change needed. Bots are always excluded.

## What counts as an active developer?

An active developer is any developer who opens at least one PR on a repo with CodeReviewr installed during the billing month. Developers who don't open a PR are not charged.

Accounts ending in `[bot]` — such as `dependabot[bot]`, `renovate[bot]`, and `github-actions[bot]` — are automatically excluded from billing.

## Open source

The free tier includes 25 PR reviews/month on public repos at no cost. If you maintain an active open source project and need higher limits, [contact us](https://codereviewr.app/contact).

## Cancellation

Cancel your subscription at any time. Your workspace reverts to the free tier (5 private PRs/month, 25 public PRs/month).

## Troubleshooting common issues

URL: https://codereviewr.app/docs/troubleshooting
Description: Solutions to common problems and answers to frequently asked questions

Here's how to diagnose and fix common issues with CodeReviewr.

## Reviews aren't triggering

**Check these first**:

1. **GitHub App installation**: Go to Settings. Is the GitHub App installed?
2. **Repository access**: In GitHub, check the App installation settings. Is the repository included?
3. **Webhook delivery**: In GitHub, go to Settings → Webhooks. Is the CodeReviewr webhook configured and delivering successfully?

**If installation looks correct**:

- **Draft PRs**: CodeReviewr reviews draft PRs and also generates an AI title and description for them.
  If a draft PR isn't being reviewed, check webhook delivery rather than draft status.
- **Large PRs**: PRs over 300KB are skipped. CodeReviewr posts a comment explaining this. Split your PR into smaller chunks.
- **Branch protection**: Some branch protection rules can interfere with webhook delivery.

**Still not working?** Check the workspace dashboard for error logs. Failed reviews show up there with error messages.

## Reviews are slow

**Normal delays**:

- Large PRs take 1-3 minutes
- Complex code requiring extensive context takes longer
- During peak usage, there can be 30-60 second delays

**Unusual delays**: If reviews consistently take 5+ minutes for small PRs, check the dashboard for queue status. If all reviews are slow, it might be a system-wide issue.

## Issues aren't posted to GitHub

**Common causes**:

- **GitHub permissions**: The App might not have permission to post comments. Re-install the GitHub App and ensure all permissions are granted.
- **PR is locked**: Some repos lock PRs from external comments. Check the PR settings.
- **Branch protection**: Comments might be blocked by branch protection rules.

**Check the dashboard**: Look at the review details. Does it show issues found, but you don't see comments? Error logs explain why.

## Duplicate issues

CodeReviewr tracks issues and marks them solved when fixed. If you're seeing duplicates, check whether they're actually the same (same file, same line, same description). If line numbers shifted, they're different issues.

When commits are force-pushed or rebased, incremental reviews can sometimes get confused. If you see exact duplicates (same issue ID reported twice), that's a bug—contact support.

## Review quality issues

CodeReviewr uses AI, which isn't perfect:

- **False positives**: Sometimes the AI flags things that aren't actually problems. Treat AI reviews as helpful suggestions, not authoritative verdicts.
- **Missed issues**: Sometimes obvious bugs aren't caught.
  AI isn't a replacement for human review; it's a supplement.
- **Context confusion**: If your codebase has unusual patterns, the AI might misunderstand them and suggest wrong fixes.

**What you can do**: Provide more context in PR descriptions, keep PRs focused, and report quality issues through the contact form. We continuously improve based on feedback.

AI code review catches most issues, but not 100%. Use it as a first pass, then add human review for critical code paths.

## CLI issues

### "Not authenticated" error

You need to set your API token before using the CLI:

```bash
codereviewr auth login cr_your_token_here
```

Get your token from the [CodeReviewr dashboard](https://codereviewr.app) under Settings → API Tokens.

### Invalid token format

Tokens must start with `cr_`. If you're getting "Invalid token format" errors, double-check that you copied the full token from the dashboard.

### "Not a git repository" error

The CLI must be run from within a git repository. Navigate to your project root (where `.git` is located) before running commands.

### Network or API errors

If you're getting connection errors:

1. Check your internet connection
2. Verify the API is accessible: `curl https://codereviewr.app/api/health`
3. If using a custom URL (`set-url`), verify it's correct with `codereviewr auth status`

### Debugging CLI issues

Use verbose mode to see detailed debug output:

```bash
codereviewr review --verbose
```

This shows which files are being read, API requests, and iteration counts—helpful for diagnosing slow reviews or unexpected behavior.

### Review not completing

If reviews consistently hit the maximum iterations limit, your codebase may need more context than the default allows. Increase the limit:

```bash
codereviewr auth set-max-iterations 50
```

## Getting help

If none of these solutions work, check your workspace dashboard for error logs and review details. You can also check GitHub for webhook delivery status and App permissions.
Contact support through the contact form on the website. Include your workspace ID, a PR link if applicable, error messages, and steps to reproduce. We respond to every support request, usually within 24 hours.

## Best practices for effective code reviews

URL: https://codereviewr.app/docs/best-practices
Description: How to structure your code changes to get better, faster, and more cost-effective AI reviews

Well-structured PRs get better reviews, faster reviews, and cheaper reviews. Here's how to optimize your code changes for maximum review effectiveness.

## Keep PRs focused and small

One logical change per PR. If you can't explain it in one sentence, it's too big.

**Why it matters**: Small PRs are easier to understand, review faster, and cost less (fewer files read for context).

**What "small" means**:

- **50-200 lines**: Ideal. Reviewers understand the full change in one pass.
- **200-500 lines**: Acceptable. Might need some file reading for context.
- **500-1000 lines**: Getting large. Expect slower reviews and higher costs.
- **1000+ lines**: Too big. Split this up.

**Example**: Adding user authentication? Split it into separate PRs for login, registration, password reset, and session management. Each PR is reviewable independently.

## Write descriptive PR titles and descriptions

The PR title summarizes the change. The description explains the why, not just the what.

CodeReviewr reads your PR description for context. It helps the AI understand intent, catch edge cases, and provide better feedback.

**Good PR title**: "Add rate limiting to API endpoints"

**Good PR description**:

```
Implements rate limiting using a token bucket algorithm to prevent abuse.

- Rate limit: 100 requests per minute per IP
- 429 status code when limit exceeded
- Configurable per endpoint

Addresses issue #42 where the API was vulnerable to brute force attacks.
```

The description helps reviewers understand why you're making the change, what approach you took, and important details.
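As a toy illustration of the size buckets above (a hypothetical helper, not part of CodeReviewr or its CLI), the guidance could be encoded in a CI warning script:

```typescript
// Hypothetical helper mirroring the PR-size buckets in the text above.
// The thresholds come from the guidance; the function itself is invented.
function prSizeVerdict(changedLines: number): string {
  if (changedLines <= 200) return "ideal";      // understood in one pass
  if (changedLines <= 500) return "acceptable"; // may need extra context
  if (changedLines <= 1000) return "large";     // expect slower reviews
  return "too big: split it up";
}

console.log(prSizeVerdict(350)); // acceptable
```

A check like this won't replace judgment, but it makes the "split this up" conversation happen before review instead of during it.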
## Keep related changes together

Separate PRs for different concerns:

1. Database migrations → PR 1
2. Backend code using the new schema → PR 2 (depends on PR 1)
3. Frontend code using the new API → PR 3 (depends on PR 2)

Mixing concerns makes reviews harder and more expensive. Reviewers need to context-switch between frontend patterns, backend logic, and database schema.

## Avoid massive refactors

Refactoring and new features don't mix. Do one at a time.

**Bad approach**: One PR with refactoring + new feature. 80 files changed, impossible to review safely.

**Good approach**:

- PR 1: Refactor payment code (no new features, just cleanup)
- PR 2: Add new payment method (uses refactored code)

PR 1 is easy to verify (tests still pass). PR 2 is easy to review (just the new feature).

## Write self-documenting code

Code should explain itself. Use clear names, avoid magic numbers, and add comments for complex logic.

**Good code**:

```typescript
const MAX_LOGIN_ATTEMPTS = 5;
const LOCKOUT_DURATION_MINUTES = 15;

if (failedLoginAttempts >= MAX_LOGIN_ATTEMPTS) {
  lockAccount(userId, LOCKOUT_DURATION_MINUTES);
}
```

**Bad code**:

```typescript
if (x >= 5) {
  doThing(userId, 15);
}
```

Clear code helps reviewers understand intent and catch issues faster.

## Respond to review feedback promptly

Address feedback, push fixes, and move forward. CodeReviewr does incremental reviews—when you push new commits, it reviews only what changed.

**Workflow**:

1. Get the initial review
2. Fix issues, push commits (within hours, not days)
3. Get an incremental review confirming the fixes
4. Merge

Fast cycles keep context fresh and issues resolved quickly.

## Know when to ignore feedback

Not every comment requires a change. Some feedback is subjective or doesn't fit your codebase.

**Act on**: Security vulnerabilities, actual bugs, performance problems, clear best practice violations.
**Consider ignoring**: Style preferences that conflict with your codebase, suggestions that don't fit your architecture, warnings about patterns you're intentionally using.

AI reviewers are tools, not oracles. Use your judgment.

## Cost optimization tips

On the paid plan, your monthly cost is determined by the number of active developers — not by how many PRs or tokens you use. Unlimited PRs are included.

**Keep PRs focused**: Single-purpose PRs get better reviews regardless of your plan. Smaller, focused changes are easier for both AI and human reviewers to reason about.

**Monitor activity**: Check your dashboard to see which developers were active each month. Understanding your team's activity patterns helps you predict your bill.

**Use the free tier for evaluation**: The free tier includes 5 private PRs and 25 public PRs per month — enough to evaluate CodeReviewr with real code before committing to a plan.

The goal isn't to minimize costs at all costs—it's to write better code that happens to be well-structured and easy to review.

## The bottom line

Good PR hygiene helps you ship better code faster. Well-structured PRs get better reviews, faster reviews, and lower costs.

Start small. Pick one practice (maybe "keep PRs under 500 lines") and focus on that. Once it's a habit, add another. Your PRs will improve—and your reviews will be faster, cheaper, and more valuable.

---

# Blog

## Teaching LLMs to Stop Wasting Tokens

URL: https://codereviewr.app/blog/teach-llms-to-stop-wasting-tokens
Date: 2026-01-19
Description: How we reduced tool calls by 45% using a simple credit budget system instead of prompt engineering. LLMs respond better to scarce resources than vague instructions.

# Teaching LLMs to Stop Wasting Tokens

An LLM with tool access is like a junior developer with sudo privileges and zero impulse control. Ask them to find a string in a codebase, and they'll happily read 47 entire files when a single grep would do.

We experienced this firsthand.
Our agent was burning through tokens reading every file in sight, even when a quick scan would probably have answered the question. The LLM was horribly inefficient. And inefficiency at scale is horribly expensive.

## The budget system

Instead of trying to prompt our way out of this (we tried; it didn't work), we added a simple constraint: **tools now cost credits.**

Every tool has a cost:

- `grep` or `list_directory`: 1 credit
- `read_file`: 5-10 credits depending on size
- `parse_ast`: 15 credits
- `run_tests`: 20 credits

The LLM starts each task with a budget of about 100 credits. Every tool call decrements it. When the budget runs low, it has to think harder about what it actually needs.

It became even more useful (and cool) when we gave it an `extend_budget` tool. It can request more credits, but it has to justify *why* and by *how much*. No handwaving. Specific reasoning, or the request gets denied.

**Example:**

```
Tool: extend_budget
Reasoning: "Need to read 3 config files (30 credits) to trace the authentication flow across modules. Current approach of grepping found the entry point but not the full chain."
Requested: 30 credits
```

## The results

**45% reduction in tool calls** across 100+ code reviews. Zero drop in accuracy.

The LLM just got more strategic. It greps before reading. It lists directories before parsing. It "thinks".

The budget extensions are fascinating too. About 12% of tasks request them, and 90% of those requests are legitimate (complex refactoring reviews, architectural analysis). We're now using the 10% that aren't to tune the default budget.

## What didn't work

We tried prompt engineering first: "Only use tools when necessary," "Prefer cheaper tools," "Think before reading files." The LLM nodded politely and kept reading everything.

We tried fixed limits: "Maximum 10 tool calls per task." It hit the limit every time, even on trivial reviews.

LLMs just respond better to scarce resources than vague instructions. Who knew?
(Anyone who's worked with humans, probably.)

## What's next

We're experimenting with dynamic budgets based on task complexity: bigger credit allocations for large or complex PRs.

If you're building agents with tool access and watching your bills climb, try adding a budget. It's surprisingly effective at teaching LLMs to be less... enthusiastic.

---

*Building CodeReviewr, an AI code review tool that charges per token instead of per developer. Turns out we're pretty motivated to optimize token usage.*

## Optimizing for Token Efficiency

URL: https://codereviewr.app/blog/token-efficiency
Date: 2026-01-02
Description: At CodeReviewr, we're building the most token-efficient code review tool on the market. Here's our roadmap to reducing token usage by 70% while maintaining expert-level reviews.

# Optimizing for Token Efficiency

At **CodeReviewr**, our business model is simple: you pay for what you use, calculated by the token. Conventional business logic suggests we should encourage you to use *more* tokens. However, we believe the opposite. Our goal isn't to maximize token volume; it's to maximize **value**. A tool that burns through your budget with redundant data isn't a reliable partner.

Here is how we are working to make CodeReviewr the most token-efficient developer tool on the market today.

---

## Transparency First

We don't believe in "black box" billing. Every time our agent performs a review, we provide clear insights into your token consumption:

* **Where** tokens were used (system prompts vs. code context vs. output).
* **Why** they were used (which files required the most context?).
* **How** you can optimize your own configurations to lean out the process.

By giving you the data, we empower you to see exactly what you're paying for.

---

## Roadmap to 70% Efficiency

We are obsessed with optimization.
Over the next year, we are rolling out a series of updates designed to **decrease token usage by as much as 70%** while maintaining the expert-level code reviews you expect.

Here is the roadmap for how we'll get there:

### 1. Intelligent Caching

Why pay to process the same boilerplate or library code twice? We are implementing sophisticated caching layers that recognize recurring context, ensuring you only pay for the *new* logic being introduced.

### 2. Codebase Indexing & Semantic Retrieval

Instead of dumping entire files into a prompt, we are building a specialized knowledge base for your codebase. By indexing your project, our agent can "search" for the exact context it needs, retrieving only the relevant snippets rather than the entire directory. This also enables semantic pattern matching to avoid duplicate code reviews.

### 3. Expanded Agent Tooling

We are giving our agent better "eyes and ears." By providing it with tools to query specific functions or file structures on demand, it can find answers quickly and precisely, rather than requiring massive context windows to be passed upfront.

### 4. Diff Pre-processing

Code diffs are often noisy. Our upcoming **Diff Pre-processing** engine will strip away irrelevant metadata and non-functional changes before they ever hit the LLM, significantly reducing input tokens.

### 5. Prompt Compression

We are applying advanced linguistic compression to our system prompts. By removing redundancy and using token-dense instructions, we can achieve the same logic with a fraction of the character count.

### 6. The Triage Model

Not every code change requires a frontier-class model. We are developing a triage system that uses smaller, faster models for simple tasks (like syntax or style checks) and reserves the heavy-hitting, expensive models for complex architectural logic.

---

## Looking Forward

Efficiency is a technical challenge we are excited to solve.
As we roll out these features throughout the year, we will be publishing a series of technical deep-dives into each technique mentioned above. We want CodeReviewr to be the smartest tool in your stack and the most responsible one for your bottom line.

## You're vibe-coding alone at 2 AM. But who's reviewing your code?

URL: https://codereviewr.app/blog/vibe-coding-flow
Date: 2025-12-01
Description: AI makes you code faster, but not better. Learn why solo developers using AI tools need code review more than ever, and how to make it work with your vibe-coding workflow.

# You're vibe-coding alone at 2 AM. But who's reviewing your code?

You know the moment. The AI generated exactly what you needed. The feature works. You deploy right away because, let's be honest, you're flying solo and there's nobody else to ask.

Three weeks later, you're debugging a production issue that could've been caught in 30 seconds of review.

This isn't a judgment; we've all done it. When you're building solo or moving fast with AI coding tools, code review feels like an unnecessary speed bump.

## Vibe-coding

"Vibe-coding" is what we call that flow state where you're building fast, often with AI assistance. Tools like [Replit](https://replit.com), [Lovable](https://lovable.dev), [Cursor](https://cursor.com), and [Bolt](https://bolt.new) let you go from idea to deployed feature in minutes. [GitHub Copilot](https://github.com/features/copilot) writes half your functions. [Claude](https://claude.ai) and [ChatGPT](https://chat.openai.com) scaffold entire components.

It's incredible. It's also dangerous.

Here's the thing: **AI makes you code faster, but it doesn't make you code better.**

We analyzed PRs from solo developers using AI coding tools. The pattern was consistent:

- 40% more subtle bugs (off-by-one errors, incorrect null checks, misunderstood business logic)
- 60% more "works but shouldn't be in production" code

The AI gives you what you asked for.
It doesn't know that your authentication check is incomplete or that you forgot to handle the edge case where the array is empty. ## Why solo developers skip code review The mental math makes sense at first: - Creating a PR for yourself feels performative - You wrote the code 10 minutes ago—you know what it does - Slowing down breaks the vibe So you merge to main, ship it, and deal with consequences later. **Consequences look like this:** - That 5-minute feature takes 2 hours to debug in production - You spend Saturday morning fixing an issue you would've caught Thursday afternoon - Your side project accumulates so much tech debt you abandon it - You can't onboard contributors because the codebase is a mystery even to you One founder told us they spent $3,000 fixing a payment bug that made it to production. The bug? A leftover Stripe test token hidden in the checkout flow. Something any code review tool would've flagged instantly (and ours has since). They were "moving too fast for process." ## PRs are your safety net Even when you're the only developer, pull requests serve three critical functions: **1. They force you to explain your changes** Writing a PR description makes you articulate *why* you made these changes. Future you (next week, next month, next year) will be incredibly grateful for this context. "Fixed bug" is useless. "Fixed race condition in WebSocket reconnection logic by adding mutex lock" is searchable, understandable, and debuggable. **2. They create reviewable chunks instead of commit soup** Look at your commit history honestly. It probably looks like: - "wip" - "fix" - "actually fix" - "god dammit" - "this time for real" PRs let you squash that into coherent, reviewable units of work. Each PR represents one logical change. Your Git history becomes documentation instead of a crime scene. **3. They catch bugs before they're your problem** Running automated review on a PR is your last line of defense.
We tested this with 50 solo developers over 3 months. Developers who used PRs + automated review caught 70% of bugs before deploy. Developers who committed directly to main? They spent 3x more time on bug fixes and rollbacks. ## How CodeReviewr fits vibe-coding workflows We built CodeReviewr specifically for developers who couldn't justify traditional code review tools. That means two things: **You pay per review, not per month.** If you're vibe-coding on weekends and ship 10 PRs a month, you pay ~$1.50. Not $15. Not $180/year. Just $1.50. Most months you'll stay inside the $5 free credit we give everyone. **Setup is literally 60 seconds.** Connect GitHub with OAuth. Done. No configuration files, no team setup. The next PR you open gets reviewed automatically. Works with any repo, any framework, any language. This matters for vibe-coding because **friction kills momentum**. If setting up code review takes 30 minutes, you won't do it. If it takes 1 minute, you might. ## It works with whatever you're already using [Replit](https://replit.com), [Lovable](https://lovable.dev), [Bolt](https://bolt.new), [Cursor](https://cursor.com), [v0](https://v0.dev) -- if it pushes to GitHub, CodeReviewr works with it. You don't change your workflow or learn new tools. Here's what that looks like: **1. You're building in [Replit](https://replit.com):** - Connect your Replit project to GitHub (they have native integration) - Enable branch protection so changes require PRs - Work in Replit like normal - When you're ready to ship, create a PR from Replit or GitHub - CodeReviewr automatically reviews it - Merge **2. You're using [Lovable](https://lovable.dev) or [Bolt](https://bolt.new):** - These tools generate code and push to your GitHub repo - Create a development branch for experimentation - When a feature is done, open a PR to main - CodeReviewr automatically reviews it - Merge **3. 
You're using [Cursor](https://cursor.com) or [Copilot](https://github.com/features/copilot) locally:** - Same workflow you already have - Push to a feature branch - Open PR - CodeReviewr automatically reviews it - Merge The pattern is identical: **whatever pushes to GitHub can trigger a review.** ## What CodeReviewr actually catches We're not trying to replace your judgment. We're trying to catch the stuff you miss when you're moving fast: - Type errors and null reference bugs (the classics) - Functions that do too much (complexity warnings) - Unused imports and dead code (cleanup you meant to do) - Security issues like exposed secrets or SQL injection risks - Inconsistent naming or patterns (helps with future maintenance) - Files that are too large (size warnings) We analyzed our first 300+ reviews and found the most common catches were: - **Missing error handling** - **Incorrect null checks** - **Unused variables and imports** - **Overly complex functions** - **Security issues** None of these are glamorous. All of them cause production bugs. ## The vibe-coding workflow that actually works After talking to hundreds of solo developers, here's the pattern that sticks: 1. **Build however you want** ([Replit](https://replit.com), [Lovable](https://lovable.dev), [Cursor](https://cursor.com), raw vim, whatever) 2. **Open PRs when features are "done"** (not when they're perfect) 3. **Let automated review catch the obvious stuff** (2-3 minutes) 4. **Fix the issues** (often automatically) 5. **Merge and deploy** Total added time: ~5 minutes per feature. Time saved in debugging: 30-120 minutes per bug caught. ## Try it today (free credits included) We give everyone $5 in free credits (~10-30 reviews depending on code complexity). For most developers, that's 1-2 months of free usage. No credit card required. Connect GitHub, open a PR, see what we catch.
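Under the hood, "whatever pushes to GitHub can trigger a review" is just a webhook handler reacting to GitHub's `pull_request` events. A minimal sketch of the decision step (the event and action names come from GitHub's webhook payloads; the logic itself is illustrative, not our production code):

```python
# Actions on the `pull_request` webhook event that warrant a review.
# "opened" fires on a new PR; "synchronize" fires on every push to it.
REVIEWABLE_ACTIONS = {"opened", "synchronize", "reopened"}

def should_review(event: str, payload: dict) -> bool:
    """Decide whether an incoming GitHub webhook should trigger a code review."""
    if event != "pull_request":
        return False  # pushes, issues, etc. are ignored
    if payload.get("action") not in REVIEWABLE_ACTIONS:
        return False  # e.g. "closed", "labeled"
    # Draft PRs usually aren't ready for feedback yet.
    return not payload.get("pull_request", {}).get("draft", False)
```

Because the trigger is the PR event itself, it doesn't matter whether Replit, Bolt, or a local `git push` created the branch.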
If you're vibe-coding and shipping fast, you don't need process for process's sake. But you do need something watching for the bugs you miss when you're in flow state. That's what we built this for. We're continuing to improve detection for AI-generated code patterns based on feedback from developers using [Cursor](https://cursor.com), [Replit](https://replit.com), and similar tools. If you catch something we miss, let us know—we're building this with you, not at you. ## Pull Requests That Don't Waste Your Time URL: https://codereviewr.app/blog/pull-requests-that-dont-waste-your-time Date: 2025-11-03 Description: Solo developers and small teams struggle with PR overhead. Here's how to optimize PRs for speed and quality—and why AI review requires different thinking than AI code generation. # Pull Requests That Don't Waste Your Time You just spent three hours implementing a feature. Now you're staring at the "Create Pull Request" button, weighing whether the overhead is worth it. For solo developers, PRs feel like process theater. You're reviewing your own code. You're approving your own work. The entire workflow was designed for teams with external reviewers, not for someone working alone at 11 PM on a side project. But here's what changed: AI code review tools now offer a legitimate second pair of eyes for solo developers. The catch? These same AI tools that help you write code faster are also making codebases less stable. Unless you change how you work. The question isn't whether to use PRs. It's how to use them to catch the problems AI-assisted development introduces, without process overhead killing your momentum. ## The size paradox Here's something counterintuitive from analyzing pull request patterns: teams that consistently ship smaller PRs end up shipping more total code than teams that batch changes into large PRs. Not more code per PR. More code overall. Smaller PRs merge faster because they're easier to review. 
They get caught in "waiting for review" state less often. They have lower revert rates because focused changes are easier to test thoroughly. And they generate more substantive feedback—when reviewers see 50 lines, they actually review. When they see 500 lines, they skim. The practical sweet spot is 25-100 lines per PR. Beyond 400 lines, effective review becomes nearly impossible—whether you're reviewing manually or using AI tools. Why does size matter specifically for AI code review? AI review tools excel at pattern matching and anomaly detection. Feed them a 500-line PR that mixes refactoring, feature work, and bug fixes, and the signal-to-noise ratio collapses. They'll catch syntax errors but miss the architectural problems buried in the complexity. Feed them a focused 50-line feature addition, and they can analyze control flow, identify edge cases you missed, and suggest optimizations specific to that exact problem. **The insight:** Small PRs don't slow development. They accelerate it by enabling higher-quality feedback loops. Human or AI. ## When PRs are worth it (and when they're not) Let's be honest about the tradeoffs. Many PRs receive zero substantive comments. Either the code is obviously correct, the reviewer is rubber-stamping, or it's an automated merge. For solo developers, that last category is common. You're not getting fresh eyes. You're getting documentation, CI/CD automation, and potentially AI analysis. That's still valuable, but not for everything. 
**PRs make sense when:** - The change is complex enough that a 10-minute review saves a 60-minute debugging session later - You're touching critical paths (authentication, payment, data deletion) - The feature needs documentation anyway - CI/CD checks provide value (tests, linting, security scans) - You want AI review to catch patterns you habitually miss **PRs waste time when:** - Fixing a typo in documentation - Updating dependencies with no code changes - Making tiny CSS tweaks that need rapid iteration - Working on throwaway prototype code We've seen developers create PRs for literal one-line changes because "that's the process." But process without value is waste. If reviewing the change takes longer than making the change, skip the PR. If the consequences of bugs outweigh the overhead, create the PR. ## The AI generation problem (and why AI review is different) Here's the truth about AI-assisted coding: it's making some codebases worse, not better. A 2024 Uplevel study tracking 800 developers found that those using GitHub Copilot introduced 41% more bugs than their peers. Google's DORA research shows that every 25% increase in AI adoption correlates with a 7.2% reduction in delivery stability. **Developers treat AI as authority rather than assistant.** When Copilot suggests code, it arrives with the confidence of autocomplete. No uncertainty markers. No caveats. This triggers a cognitive shortcut: "the machine suggested it, so it must be right." **AI makes writing code feel effortless, so developers write more code without thinking more.** The constraint that forced you to pause—typing—is gone. You can generate a hundred lines in seconds. That psychological speed removes the natural moment of reflection. **Developers review AI-generated code less critically than human-written code.** When a colleague writes code, you question it. When AI writes code, many developers just verify it runs. 
The code *looks* professional, so it passes the sniff test even when it shouldn't. ### Why AI review works differently AI code review doesn't have these problems because the incentive structure is inverted. When AI generates code, you want to accept it quickly and move forward. Speed is the entire point. When AI reviews code, you're not in a rush. You're in quality-check mode. Skepticism is built into the process. This creates a natural check-and-balance: AI generation optimizes for velocity, AI review optimizes for correctness. They work in opposite directions, which is exactly what you need. But this only works if you structure your workflow correctly. ## Optimizing PRs for AI review AI code review tools need three things to be useful: context, constraints, and clear success criteria. **Structure your PR description for machines, not just humans:** ```markdown ## Context Implementing user profile search with fuzzy matching ## Security considerations - User input is sanitized before Postgres query - Rate limiting: 10 requests/minute per user - Results exclude soft-deleted users ## Edge cases tested - Empty search queries - Special characters in names - Unicode/emoji in profiles - Pagination with deleted users ``` This isn't for you. This is explicitly telling the AI what to look for. Generic descriptions like "Add search feature" generate generic feedback. Specific context generates specific analysis. **Keep PRs focused and single-purpose.** AI tools struggle with mixed intent. A PR titled "Update user service" that includes refactoring, new features, and bug fixes confuses pattern analysis. Three separate PRs, each optimized for its specific change type, produce dramatically better feedback. **Use feature flags for incomplete work.** Don't batch five related changes into one massive PR. Ship incrementally behind flags. Each small PR gets thorough AI review. The big-batch approach gets skimmed—by humans and AI alike. 
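A feature flag doesn't require infrastructure; an environment-driven gate is enough to let each small PR merge dark until the feature is complete. A minimal sketch (all names and data here are illustrative):

```python
import os

DATA = ["alice", "alina", "bob"]  # stand-in dataset

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a boolean feature flag from the environment, e.g. FEATURE_FUZZY_SEARCH=1."""
    value = os.environ.get(f"FEATURE_{name.upper()}")
    if value is None:
        return default
    return value.lower() in ("1", "true", "yes", "on")

def exact_search(query: str) -> list:
    # Existing behavior: stays the default until the flag flips.
    return [name for name in DATA if name == query]

def fuzzy_search(query: str) -> list:
    # New behavior shipped incrementally: naive prefix match as a placeholder.
    return [name for name in DATA if name.startswith(query[:3])]

def search(query: str) -> list:
    # Each small PR can extend fuzzy_search; users only see it once enabled.
    if flag_enabled("fuzzy_search"):
        return fuzzy_search(query)
    return exact_search(query)
```

Each incremental PR touches only the flagged path, so the reviewable diff stays small while production behavior is unchanged.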
### The honest assessment Will optimized PRs with AI review transform your development workflow? For some developers, absolutely. If you're working on side projects without code review, AI fills a legitimate gap. If you're a small team shipping frequently, the cost per review makes financial sense. If you're using AI code generation heavily (Copilot, Cursor, etc.), AI review acts as a necessary counterbalance. For others, not really. If your code is simple CRUD without edge cases, AI won't find much. If you're already extremely disciplined about self-review, the marginal improvement may not justify the workflow change. If you're working on throwaway prototypes, skip the ceremony. Here's the nuance that matters: **AI review is most valuable when you're using AI generation.** If you're writing every line manually, pausing to think between each function, your code already has the benefit of human deliberation. AI review might catch a few things, but you're not in the high-risk category. If you're accepting Copilot suggestions rapidly, generating boilerplate with ChatGPT, or using Cursor to scaffold entire features, you're in the danger zone. You're optimizing for speed over correctness. AI review becomes your safety net. The value proposition isn't "AI makes code review better." It's "AI code generation creates new risks, and AI code review is the least expensive way to mitigate them." ## What we're building We built CodeReviewr because we kept paying $30/month for code review tools we used twice a week. The per-seat model didn't match our usage pattern. So we made usage-based pricing work: no subscriptions, no seat licenses, just pay-per-token pricing. But the valuable insight isn't "use our product." It's **structure changes for review before you need review**. Keep PRs under 100 lines when possible. Write descriptions that explain the why, not just the what. Add explicit context about edge cases and security considerations. 
Be more skeptical of AI-generated code than hand-written code. Do that consistently, and whether you use AI review (CodeReviewr, CodeRabbit, Qodo), human review, or just disciplined self-review, your code quality improves. The discipline matters more than the tooling. Here's to shipping quality code without wasting time on process theater. --- If usage-based pricing appeals to you, try CodeReviewr. If you have thoughts on what makes AI code review actually useful versus performative, we're listening at [hey@codereviewr.app](mailto:hey@codereviewr.app). ## Why We're Building CodeReviewr URL: https://codereviewr.app/blog/why-we-are-building-codereviewr Date: 2025-10-29 Description: Per-seat pricing is broken for solo developers and small teams. Here's why we're building an AI code review tool that charges per token instead of per subscription. # Why We're Building CodeReviewr Last month, I tallied up all my developer tool subscriptions. Cursor. Linear. Vercel. Claude. Sentry. Postman. CloudWatch. Datadog. And of course a code review tool I'd been paying $30/month for despite only using it twice a week on my side projects. **The total? $240 per month. $2,880 per year.** Here's what bothered me most: My code review tool charged per seat, per month, regardless of how much I actually used it. I was effectively paying $7.50 per review for something that could have cost me less than $1 if I only paid for what I used. That's when we started asking: What if developer tools stopped charging for access and started charging for value? ## The per-seat prison The AI code review market is booming. It's valued at $750 million in 2025, growing at 9.2% annually, and 76% of developers are using or planning to use AI tools. The demand is clear. The technology works. The market is validated. So why are developers increasingly frustrated? CodeRabbit charges $12-30 per month per seat. GitHub Copilot: $10-39. Qodo: $30-45. Bito: $15-25.
Every major player uses the same model: per-seat, per-month, regardless of usage. For enterprise teams doing hundreds of reviews daily, this makes sense. But for the rest of us? **Sarah**, a freelance developer managing three client projects, does maybe 15 code reviews per month across all projects. She pays $30/month, which works out to **$2 per review** for a tool she uses sporadically. **Marcus** runs a two-person startup. His team fluctuates between intense sprint weeks (30 reviews) and quiet planning weeks (3 reviews). He still pays **$60/month** whether they ship daily or barely at all. **The indie hacker** working nights and weekends on a SaaS product does maybe 10 reviews per month. Paying $180-360 per year for a tool they use twice a week feels wasteful, so they just skip AI code review entirely. There are 180,830 freelance software developers in the United States alone. 50% of all developers work in teams of 2-7 people. This isn't a niche. **This is half the developer market.** ## The math that makes no sense 90% of professional teams employ code review procedures, but frequency varies wildly. Some teams review multiple times daily. Others review once or twice a week. Side projects? Maybe 5-10 times per month. Research shows that 8 out of 10 companies using per-user pricing should be using a different value metric, simply because their products don't provide more value with additional users. Code review tools are the perfect example. The value isn't in adding more seats. It's in the quality and speed of each review. Yet every major tool punishes small teams and solo developers with per-seat pricing that doesn't reflect actual value delivered. The average person now juggles 12 active subscriptions, and subscription fatigue is real enough that it's spawned an entire industry projected to reach $9.8 billion by 2034. For developers specifically, tool subscription overload has become a running joke and a genuine financial burden. 
## Our competitors are killing it (that's the point) Our competitors are wildly successful. CodeRabbit just raised a $60 million Series B, bringing their total to $88 million. They serve over 8,000 paying customers, including Fortune 100 companies. Qodo raised $50 million. These aren't struggling startups. They're validated, well-funded, growing companies. **This isn't a criticism. This is validation.** It proves that AI-powered code review is valuable, that developers will pay for quality tools, and that there's a massive market here. CodeRabbit is excellent for teams with daily review needs and budget. Qodo's advanced testing features serve a real need. GitHub Copilot's integration is unmatched if you're in the Microsoft ecosystem. But here's what the funding and success metrics hide: **They're all optimizing for the same customer.** Every major player targets teams with consistent, high-volume review needs, budget for monthly SaaS commitments, 5+ developers (or willingness to pay for unused seats), and predictable, stable workloads. Who's left out? Developers who work solo or in pairs. Developers with variable review volumes. Developers building side projects and open source. Bootstrapped teams. Anyone who wants to try AI code review without monthly commitment. 76% of startups are self-funded. US freelancers increased 90% between 2020-2024. Startup team sizes dropped 45% from 6.4 employees (2022) to 3.5 employees (2024). The market is moving toward smaller, leaner teams. Pricing models haven't adapted. ## Usage patterns tell a different story We analyzed our own behavior and talked to developers in our target market. **The sporadic user**: 5-20 reviews per month, pays $180-360 per year in subscriptions, gets $30-80 worth of value. **The burst worker**: Intense periods (50 reviews/month) followed by quiet periods (5 reviews/month). Pays the same monthly rate regardless, averaging $5-15 per review. 
**The side project builder**: Codes on weekends, 8-12 reviews per month. Cancels subscription when not actively coding, then re-subscribes. Friction kills consistency. The pattern is clear: **Usage doesn't match subscription cadence.** First renewal rates dropped 13% for annual subscriptions in 2023. 70% of consumers value the ability to pause or modify subscriptions without hassle. Developers are actively looking for alternatives to subscription overload. ## The fairness problem There's another issue we couldn't ignore: geographic inequality. We found discussions from developers around the world reacting to GitHub Copilot's pricing: Indonesia: "$10 can be used for 1 week's food" Vietnam: "$10 = 250,000d, about 8 days of prepared lunch" Turkey: "280 TL there should be regional pricing like Netflix, Steam games" When we charge flat global rates, we're saying a developer in Jakarta should pay the same as a developer in San Francisco despite a 10x difference in purchasing power. Netflix and Steam figured this out years ago. But beyond geography, there's a fundamental fairness question: **Why should you pay for access when you should pay for value?** If we do 47 reviews for you in a month, we should charge you fairly for 47 reviews. If we do 5, you should pay for 5. You get value, you pay for that value, everyone's happy. ## Pay for reviews, not seats We built CodeReviewr around a simple principle: **charge per token, not per developer.** **No subscriptions**: Pay only when you actually use AI code review **No seat licenses**: Solo developer or 50-person team, same pricing **No premium tiers**: Every review gets the same AI, same features, same quality **Transparent pricing**: $5 free credits to start (10-30 reviews depending on complexity), then clear token-based pricing Example: 47 reviews in a month costs $7. Not $180-360 like per-seat competitors. ### Why tokens? We chose token-based pricing because it aligns with actual AI costs. 
Running LLMs costs scale with consumption. Our costs are variable, so our pricing should be too. It provides predictability. You can estimate costs per review based on your typical PR size. It scales naturally. Review 5 PRs or 500 PRs, pricing scales fairly with usage. It removes barriers. No minimum commitment means you can try it risk-free. Is this perfect? No. Usage-based pricing has challenges. Developers worry about cost unpredictability and bill shock. We're addressing this with clear cost estimation, usage alerts, and transparent pricing. The alternative, continuing to charge per seat regardless of usage, felt fundamentally unfair. ## What we're building CodeReviewr analyzes pull requests intelligently with bug detection, code improvement suggestions, and security issues. It integrates with GitHub seamlessly via OAuth in under 60 seconds. It provides actionable feedback, not nitpicky style comments, but meaningful insights. It works for any team size with no seat minimums. But beyond features, we're building something else: **a tool that respects your financial reality.** We know you're juggling subscriptions. We know budgets are tight. We know you need professional-grade code review but can't justify $30/month for sporadic use. We know you value fairness, transparency, and no-BS pricing. Code review is critical. Developers spend 2+ days per week waiting for reviews, and reviews can detect 55-65% of latent defects. But access to good tools shouldn't require breaking the bank or committing to yearly contracts. ## The philosophy behind it We're developers first, founders second. **Tools should charge for value, not access.** If we provide $10 of value, we should charge $1-2.50, not $30 just because you want to keep the option available. **Transparency builds trust.** 45% of SaaS vendors hide their pricing behind "Contact Sales," and developers universally hate this. We publish our pricing publicly. No sales calls. No hidden enterprise tiers. 
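The "clear cost estimation" mentioned above is simple arithmetic once pricing is token-based. A back-of-the-envelope sketch (the rates and token counts are placeholder assumptions for illustration, not CodeReviewr's published pricing):

```python
# Illustrative cost model for usage-based review pricing.
# These rates are made-up placeholders, NOT actual prices.
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token (assumption)
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token (assumption)

def review_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one review from its token counts."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

def monthly_cost(reviews: int, avg_in: int = 30_000, avg_out: int = 2_000) -> float:
    """Cost scales linearly with usage: quiet months cost proportionally less."""
    return reviews * review_cost(avg_in, avg_out)
```

The point of the model is the shape, not the numbers: a month with 5 reviews costs exactly a tenth of a month with 50, which is the fairness property per-seat pricing can't offer.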
**Small teams deserve great tools.** The fact that you're not a 50-person VC-backed startup doesn't mean you should accept inferior tools or unfair pricing. **Usage patterns vary, pricing should too.** Some months you ship constantly. Some months you plan. Subscriptions punish this natural rhythm. **Fair pricing is good business.** If we treat you fairly, you'll stick around. If we nickel-and-dime you, you'll leave. ## Who we're building for We're not trying to replace CodeRabbit for teams that need enterprise features. We're not competing with GitHub Copilot's 20 million users. We're not going after Qodo's advanced testing capabilities. **We're building for the developers everyone else forgot.** The solo developer building a SaaS product at night. The two-person startup bootstrapping their way to profitability. The freelancer managing multiple client codebases. The indie hacker who needs professional code review but can't justify $360/year for 20 reviews per month. You're 180,000+ freelance developers in the US alone. You're 50% of all developers working in small teams. You're 52% of developers coding on side projects. You're not a niche. You're half the market. And you deserve tools that treat you fairly. ## Why now? Subscription fatigue is real and intensifying. The subscription fatigue solutions market is projected to grow from $1.3 billion to $9.8 billion by 2034. 61% of SaaS companies are testing or planning usage-based models. The industry is shifting toward consumption-based pricing because it's fundamentally fairer. Developer tools need to recognize that pricing isn't merely about revenue. It's about driving the right behaviors and adoption patterns. Developers want transparency, fairness, and flexibility. The current model leaves too many good developers without access to tools that could genuinely help them. And honestly? We were tired of paying for seats we didn't use. ## What's next CodeReviewr is here. We're ready for you. 
We're starting with GitHub integration, pay-per-token pricing, and $5 in free credits (approximately 10-30 reviews depending on your code complexity). No subscriptions. No seat licenses. No BS. Just fair, transparent, pay-per-use AI code review for developers who value both quality and fairness. If this resonates with you, we'd love to hear from you. What pricing models frustrate you? What would make you actually use AI code review tools? How can we build something that genuinely serves solo developers and small teams? Drop us a line at [hey@codereviewr.app](mailto:hey@codereviewr.app) or follow our journey at [codereviewr.app](https://codereviewr.app). Here's to building tools that actually work for the way we work. The CodeReviewr Team