Agentic Coding Best Practices

Stop Writing Code for Humans - The Future Belongs to AI Agents

By Ben Houston, 2025-03-05

For the past 6 weeks, I've been developing mycoder.ai, an agentic coding tool. Of course, I've been using mycoder to modify and improve itself.

The framework understands your project structure, creates GitHub issues, implements features, submits PRs, ensures your build passes, runs tests, and even validates your web app by clicking through pages and checking console logs.

When I first started using mycoder to modify itself, I watched it struggle with aspects of the codebase that humans considered "well-organized." The agent would get lost in complex directory structures, confused by re-exported modules, and bogged down by monorepo package explosions.

This led me to a realization: what's optimal for human developers - or at least what we consider ideal - isn't necessarily optimal for AI assistants. And as AI becomes more integrated into our development workflows, we need to adapt.

AI-First Coding Practices That Also Benefit Humans

Here's where things get interesting - the changes that make your code more AI-friendly also improve the experience for human developers, especially those new to your project:

1. Documentation as Code

Traditional approach: Keep detailed documentation in wikis, notion docs, or team knowledge bases separate from the codebase.

AI-friendly approach: Put comprehensive README.md files at the root of your project and in each package. Create a CONTRIBUTING.md document that explains coding practices, PR expectations, and test coverage requirements. Where needed, add documentation near the source files that require explanation, or document the goals and techniques directly in those files when they are not obvious from the code.

Your AI will read these documents and follow them. This eliminates the need for custom prompts or separate AI guidelines - your project documentation becomes your AI instruction set.

Example of an AI-friendly README.md:

```markdown
# Frontend Package

This package contains the React frontend application.

## Directory Structure

- `src/components`: Reusable UI components
- `src/pages`: Page components corresponding to routes
- `src/hooks`: Custom React hooks
- `src/utils`: Utility functions

## Development Workflow

1. Run `npm run dev` to start the development server
2. Follow the component pattern in `src/components/Button.tsx` for new components
3. All new components must have corresponding test files

## Testing

We use Jest and React Testing Library. Run tests with `npm test`.
```

When I provide this level of documentation, mycoder correctly follows the project conventions without additional prompting.

2. Minimize Package Explosion

Traditional approach: Create a new package for every distinct functionality to enforce separation of concerns.

AI-friendly approach: Use a minimal number of packages with clear boundaries. Only create separate packages when code needs to be shared between multiple applications.

The AI agent struggles to navigate between numerous packages, just as new team members do. Each package adds cognitive overhead as the agent must understand not just the code but the relationships between packages.

Here's a comparison of project structures:

```
# Traditional approach with package explosion
my-project/
  packages/
    ui-components/
    data-models/
    api-client/
    utils/
    form-validation/
    state-management/
    analytics/
    ...and 10 more micro-packages

# AI-friendly approach
my-project/
  packages/
    frontend/
    backend/
    shared/
```

In my experience, mycoder can understand and modify the second structure in minutes, while the first one might take 5 minutes just to map all the dependencies. You can still organize your components, data models, etc. into folders within these packages - just skip the unnecessary top-level segmentation.

3. Simplify Project Structure

Traditional approach: Deep nested directories with many small files for maximum modularity.

AI-friendly approach: Flatter directory structures with semantically meaningful names. Put related functionality together rather than fragmenting it across many files and folders.

Example of a simplified structure:

```
# Traditional deep nesting
src/
  features/
    auth/
      components/
        forms/
          inputs/
            PasswordInput.tsx
            EmailInput.tsx
          LoginForm.tsx
          SignupForm.tsx
        buttons/
          SubmitButton.tsx
      hooks/
        useAuth.ts
      utils/
        validation.ts
      types/
        auth.types.ts

# AI-friendly approach
src/
  auth/
    LoginForm.tsx   # Contains PasswordInput, EmailInput, and SubmitButton
    SignupForm.tsx
    useAuth.ts
    validation.ts
    auth.types.ts
```

In tests with mycoder, the flatter structure reduced implementation time for new features.

4. Avoid Re-exports and Indirection

Traditional approach: Use index.ts files at every level to re-export components, creating a clean public API.

AI-friendly approach: Limit re-exports to the package level. Internal re-exports create indirection that makes it harder for the AI to trace dependencies.

Example of problematic re-exports:

```typescript
// src/components/forms/inputs/index.ts
export * from './TextInput';
export * from './NumberInput';
export * from './SelectInput';

// src/components/forms/index.ts
export * from './inputs';
export * from './Form';

// src/components/index.ts
export * from './forms';
export * from './buttons';

// Usage in app
import { TextInput } from 'src/components';
```

The AI has to trace through multiple files to understand where `TextInput` is actually defined. A more direct approach:

```typescript
// Direct import
import { TextInput } from 'src/components/forms/inputs/TextInput';
```

5. Prefer Compile-Time Validation Over Runtime Checks

Traditional approach: Rely on runtime validation and testing to catch errors.

AI-friendly approach: Push as much validation as possible to compile time using strong typing and static analysis.

Consider these two approaches to defining routes and handlers:

```typescript
// Runtime validation (problematic for AI) - Remix-style exports
// user-route.tsx
import { json } from 'remix';

// These exports are convention-based -
// no type checking ensures they're correct
export const loader = async ({ params }) => {
  // If you misspell this as 'getUserData', it fails at runtime
  return json(await getUserDetails(params.id));
};

export const action = async ({ request }) => {
  // If you forget to return json(), it fails at runtime
  const data = await request.formData();
  return json(await updateUser(data));
};

// Are you sure it is called "meta"? Or is it "getMeta",
// or "metadata"? You don't know at compile time.
export const meta = () => {
  // If you return the wrong structure, it fails at runtime
  return { title: 'User Details' };
};

// Compile-time validation (AI-friendly)
// user-routes.tsx
import { createRoute, json } from 'some-typed-router';

export const userRoute = createRoute({
  path: '/users/:id',
  loader: async ({ params }) => {
    return json(await getUserDetails(params.id));
  },
  action: async ({ request }) => {
    const data = await request.formData();
    return json(await updateUser(data));
  },
  meta: () => {
    return { title: 'User Details' };
  }
});
// TypeScript will catch errors if any of these functions
// don't match expected types
```

I initially copied this convention-based pattern from Remix in my yargs-file-commands library. However, I quickly ran into issues with it because the AI struggled to understand the implicit connections between filenames, export names and functionality. There was no compile-time validation to ensure the exports matched the expected patterns.

I eventually refactored the library to use a typesafe declaration method patterned after TanStack Start and TanStack Router. This dramatically improved the AI's ability to understand and modify the code correctly.

When I switched to typesafe declarations with strong type checking, mycoder's success rate for route and command-related tasks improved from about 60% to nearly 100%. The AI could now rely on TypeScript to catch errors rather than having to understand implicit conventions.

6. Consolidate Linting and Formatting at the Root Level

Traditional approach: Each package in a monorepo has its own ESLint and Prettier configurations, leading to inconsistencies and maintenance overhead.

AI-friendly approach: Move all linting and formatting configuration to the root level, ensuring consistency across packages and simplifying maintenance.

This approach follows the DRY (Don't Repeat Yourself) principle and removes complexity from individual packages. The AI doesn't have to understand multiple different linting configurations - there's just one source of truth.

Here's how I've implemented this in mycoder's monorepo package.json:

```json
{
  "name": "mycoder-monorepo",
  [...]
  "scripts": {
    [...]
    "lint": "eslint . --fix",
    "format": "prettier . --write"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": [
      "pnpm lint",
      "pnpm format"
    ]
  },
  "devDependencies": {
    "@eslint/js": "^9",
    "@typescript-eslint/eslint-plugin": "^8.23.0",
    "@typescript-eslint/parser": "^8.23.0",
    "eslint": "^9.0.0",
    "eslint-config-prettier": "^9",
    "eslint-import-resolver-typescript": "^3.8.3",
    "eslint-plugin-import": "^2",
    "eslint-plugin-prettier": "^5",
    "eslint-plugin-promise": "^7.2.1",
    "eslint-plugin-unused-imports": "^4.1.4",
    "husky": "^9.1.7",
    "lint-staged": "^15.4.3",
    "prettier": "^3.5.1",
    "typescript-eslint": "^8.23.0"
  }
}
```

Notice how all linting and formatting dependencies and scripts are defined at the root level. Individual packages don't need to worry about these concerns - they just focus on their specific functionality.

This approach has significantly reduced the cognitive load for mycoder when working with the monorepo. It doesn't need to understand different linting rules for different packages, and it can run lint and format commands from the root directory to ensure consistency across the entire project.

Also, you'll notice that both lint and format run as pre-commit hooks. This forces the AI to make them pass for any git commit, without me even having to document the requirement.
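For reference, with husky v9 the pre-commit hook is just a script file in `.husky/` that runs whatever commands you list. A typical hook that triggers the lint-staged configuration above looks like this (a sketch - the exact hook contents in mycoder may differ):

```shell
# .husky/pre-commit
# Runs the "lint-staged" config from the root package.json, which applies
# `pnpm lint` and `pnpm format` to the staged files before each commit.
pnpm lint-staged
```

If either command fails, the commit is rejected, so the AI gets immediate, enforced feedback rather than relying on documented conventions.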

7. Avoid Overly Interdependent Configuration Systems

Traditional approach: Create centralized configuration files with complex inheritance chains to maximize reuse (DRY principle taken to the extreme).

AI-friendly approach: Favor self-contained, independent configuration files for each package or module, even if it means some duplication.

Consider this example of TypeScript configuration in a monorepo:

```
# Overly interdependent configuration (problematic for AI)
root/
  tsconfig.base.json    # Base configuration
  tsconfig.react.json   # Extends base, adds React settings
  tsconfig.node.json    # Extends base, adds Node.js settings
  packages/
    frontend/
      tsconfig.json     # Extends ../../tsconfig.react.json
    backend/
      tsconfig.json     # Extends ../../tsconfig.node.json
    shared/
      tsconfig.json     # Extends ../../tsconfig.base.json
```

This approach creates a brittle system where modifying the root configuration files can unexpectedly break packages that depend on them. The AI has to trace through multiple files to understand the full configuration context for any single package.

A more AI-friendly approach:

```
# Independent configuration (AI-friendly)
root/
  packages/
    frontend/
      tsconfig.json   # Complete, self-contained config
    backend/
      tsconfig.json   # Complete, self-contained config
    shared/
      tsconfig.json   # Complete, self-contained config
```

While this approach may duplicate some configuration, it makes each package self-contained and easier to understand. The AI can modify a package's configuration without worrying about breaking other packages or needing to trace through a complex hierarchy of includes.
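As a sketch, a self-contained `packages/frontend/tsconfig.json` might look like the following. The specific compiler options are illustrative, not mycoder's actual settings - the point is that there is no `extends` clause, so the whole configuration is visible in one file:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "jsx": "react-jsx",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```

The AI (or a new developer) can read this one file and know everything about how the package compiles.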

This applies to other configuration types as well, such as webpack, babel, or eslint configurations. When I switched to independent configurations in my projects, mycoder could make changes more confidently without introducing unexpected side effects.

8. Type-Driven Development

Traditional approach: Use loose types or rely on implicit typing, focusing more on implementation details than type contracts.

AI-friendly approach: Define comprehensive type systems that capture the domain model and constraints, making the expected behavior explicit at the type level.

AI agents thrive when they can understand the shape and constraints of data through strong type signatures. This helps them implement functionality correctly without having to infer implicit rules.

Here's an example of how exhaustive type definitions and discriminated unions can guide AI implementation:

```typescript
// Instead of loose object types like this:
type LooseUser = {
  id: string;
  name: string;
  role?: string;
};

// ...use more precise types with discriminated unions:
type AdminUser = {
  kind: 'admin';
  id: string;
  name: string;
  permissions: string[];
};

type RegularUser = {
  kind: 'regular';
  id: string;
  name: string;
};

type User = AdminUser | RegularUser;

// Functions become self-documenting with these types
function getUserAccess(user: User): string[] {
  switch (user.kind) {
    case 'admin':
      return user.permissions;
    case 'regular':
      return ['read'];
  }
}
```

The discriminated union pattern (using `kind` as a discriminator) makes it impossible to handle a user without checking what kind they are. This provides built-in documentation and guides the AI to implement the correct behavior for each user type.

When I started using this pattern extensively in mycoder, it dramatically reduced the number of logical errors in the AI-generated code. The AI could understand from the types exactly what was expected.

9. Consistent File Organization Patterns

Traditional approach: Organize files differently across parts of the codebase based on different developers' preferences or evolving standards.

AI-friendly approach: Use consistent, predictable patterns for file organization within components or modules.

AI models work better with consistent patterns because they can more easily predict where to find related code and how components should be structured. This consistency reduces the cognitive load for both AI and human developers.

Consider this pattern for organizing React components:

```
# Consistent file organization for components
src/
  components/
    Button/
      Button.tsx          # Component implementation
      Button.test.tsx     # Tests
      Button.types.ts     # Type definitions
      Button.utils.ts     # Helper functions
      Button.module.css   # Scoped styles
      index.ts            # Exports the component
    Modal/
      Modal.tsx           # Same pattern applied consistently
      Modal.test.tsx
      Modal.types.ts
      Modal.utils.ts
      Modal.module.css
      index.ts
```

This co-location helps the AI understand all aspects of a component without having to search across multiple directories. The consistent naming pattern also makes it easier for the AI to predict where specific functionality would be located.

In my experience with mycoder, this consistent organization pattern reduced the time it took for the AI to understand and modify components by about 30%. The AI could confidently predict where to find related code without having to search through the entire codebase.

10. Test-Case Driven Documentation

Traditional approach: Write documentation and tests separately, often leading to documentation that becomes outdated as the codebase evolves.

AI-friendly approach: Use well-written tests as living documentation that clearly demonstrates the expected behavior of functions and components.

Well-structured tests serve as excellent guides for AI agents because they explicitly show inputs, outputs, and expected behavior in a way that can be verified. This approach ensures that documentation stays in sync with the actual code.

Here's an example of how test cases can serve as clear documentation:

```typescript
// This test clearly documents the expected behavior
test('calculateDiscount applies percentage discount to eligible items', () => {
  // Given a cart with eligible and non-eligible items
  const cart = [
    { id: '1', price: 100, eligibleForDiscount: true },
    { id: '2', price: 50, eligibleForDiscount: false }
  ];

  // When applying a 10% discount
  const result = calculateDiscount(cart, 10);

  // Then only eligible items are discounted
  expect(result).toEqual([
    { id: '1', price: 90, eligibleForDiscount: true },
    { id: '2', price: 50, eligibleForDiscount: false }
  ]);
});

// Test documenting edge cases
test('calculateDiscount handles zero prices and 100% discounts correctly', () => {
  const cart = [
    { id: '1', price: 0, eligibleForDiscount: true },
    { id: '2', price: 100, eligibleForDiscount: true }
  ];

  // A 100% discount should reduce eligible prices to zero
  const result = calculateDiscount(cart, 100);

  expect(result).toEqual([
    { id: '1', price: 0, eligibleForDiscount: true },
    { id: '2', price: 0, eligibleForDiscount: true }
  ]);
});
```

These tests not only verify that the code works correctly but also serve as clear examples of how the `calculateDiscount` function should behave with different inputs. The AI can use these examples to understand the expected behavior and implement or modify the function accordingly.

When I implemented comprehensive test suites like this in mycoder, the AI's success rate in implementing new features that matched expectations increased significantly. The tests provided clear guidance on how the code should behave in various scenarios.
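For illustration, here is one implementation of `calculateDiscount` that satisfies the tests above. It is a sketch: the function name and cart shape come from the test examples, not from mycoder's actual codebase.

```typescript
type CartItem = {
  id: string;
  price: number;
  eligibleForDiscount: boolean;
};

// Applies a percentage discount to eligible items only,
// leaving non-eligible items untouched.
function calculateDiscount(cart: CartItem[], percent: number): CartItem[] {
  return cart.map((item) =>
    item.eligibleForDiscount
      ? { ...item, price: item.price - (item.price * percent) / 100 }
      : item
  );
}
```

Given tests this explicit, the implementation is nearly forced - which is exactly the property that makes test-case driven documentation so effective for AI agents.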

Lean Into AI Mistakes as Feedback

Here's a counterintuitive tip: when your AI assistant makes mistakes interpreting your codebase, don't just correct it - consider that a signal that your code organization might be confusing.

If the AI thought your file should have a different name, or expected a directory to contain something else, that might indicate that your naming or organization is counterintuitive. These AI "mistakes" often mirror the same confusion a new human developer would experience.

Use AI to Refactor Your Codebase for AI

I've found that one of the best uses of mycoder is to have it analyze and refactor my codebase to be more AI-friendly. The AI can:

  1. Summarize your project structure
  2. Write or update READMEs
  3. Suggest reorganization of files and directories
  4. Consolidate scattered configurations

This creates a virtuous cycle where the AI improves the codebase, making it easier for the AI (and humans) to work with it in the future.

The Controversial Part: Are We Coding for AI or Humans?

Here's where I expect some pushback: am I suggesting we optimize our code for AI rather than human readability?

Not exactly. The practices I'm advocating actually improve readability for humans too. But I am suggesting that as AI becomes a more active participant in development, we need to think about readability from both perspectives.

The mistake would be to create "AI-specific" documentation or code organization that differs from what we use for humans. Instead, we should merge these concerns, creating codebases that are intuitive for both human and artificial intelligence.

What do you think? Are you structuring your code with AI assistants in mind? Or do you believe we should maintain human-centric approaches regardless of how AI evolves? I'd love to hear your perspective.