# Error Checking and Feedback Loops

This document outlines the processes for error checking, debugging, and establishing feedback loops.
The goal is to create a seamless, autonomous CI/CD pipeline in which the AI can identify, diagnose, and fix issues with minimal human intervention.

## Table of Contents

- GitHub Actions Workflow Monitoring
- Local Build and Test Feedback
- Code Quality Tool Integration
- Automated Error Resolution
- Feedback Loop Architecture
- When to Consult Humans
## GitHub Actions Workflow Monitoring

### Checking Workflow Status via GitHub API

AI assistants can directly monitor GitHub Actions workflows using the GitHub API.
This helps identify failures and diagnose issues:

```text
github-api /repos/{owner}/{repo}/actions/runs
```

#### Step-by-Step Process

1. **Get Recent Workflow Runs**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/actions/runs
   ```

2. **Filter for Failed Runs**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/actions/runs?status=failure
   ```

3. **Get Details for a Specific Run**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/actions/runs/{run_id}
   ```

4. **Get Jobs for a Workflow Run**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/actions/runs/{run_id}/jobs
   ```

5. **Analyze Job Logs (if accessible via API)**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/actions/jobs/{job_id}/logs
   ```
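The run-polling steps above boil down to fetching the runs payload and filtering on each run's `conclusion` field. As a minimal sketch (assuming the documented `workflow_runs` response shape; the sample payload below is invented for illustration):

```python
def failed_runs(runs_payload):
    """Return (id, name, conclusion) for each failed run in an
    /actions/runs API payload (assumes the documented response shape)."""
    return [
        (run["id"], run["name"], run["conclusion"])
        for run in runs_payload.get("workflow_runs", [])
        if run.get("conclusion") == "failure"
    ]


# Sample payload shaped like the GitHub API response (abbreviated, fabricated).
sample = {
    "workflow_runs": [
        {"id": 1, "name": "CI", "conclusion": "success"},
        {"id": 2, "name": "Playground Tests", "conclusion": "failure"},
    ]
}
print(failed_runs(sample))  # → [(2, 'Playground Tests', 'failure')]
```

The same filter applies whether the payload comes from `github-api`, `gh api`, or a direct HTTP call.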
### Common GitHub Actions Errors and Solutions

#### Missing or Outdated Action Versions

**Error:** `Missing download info for actions/upload-artifact@v3`

**Solution:** Update to the latest version of the action:

```yaml
uses: actions/upload-artifact@v4
```

#### Port Configuration Issues for WordPress Multisite

**Error:** `The current host is 127.0.0.1:8888, but WordPress multisites do not support custom ports.`

**Solution:** Use port 80 for multisite environments:

```bash
npx @wp-playground/cli server --blueprint playground/multisite-blueprint.json --port 80 --login &
```

#### Artifact Path Syntax Issues

**Error:** Invalid path syntax for artifacts

**Solution:** Use the multi-line format for better readability:

```yaml
path: |
  cypress/videos
  cypress/screenshots
```

#### Concurrency Control

**Problem:** Redundant workflow runs when multiple commits land quickly

**Solution:** Add concurrency control to cancel in-progress runs:

```yaml
concurrency:
  group: playground-tests-${{ github.ref }}
  cancel-in-progress: true
```
## Local Build and Test Feedback

### Monitoring Local Test Runs

AI assistants can monitor local test runs by analyzing the output of test commands.

#### PHP Unit Tests

```bash
composer run phpunit
```

#### Cypress Tests

```bash
npm run test:single
npm run test:multisite
```

#### WordPress Playground Tests

```bash
npm run test:playground:single
npm run test:playground:multisite
```
### Capturing and Analyzing Test Output

1. **Run Tests with Output Capture**:

   ```bash
   npm run test:single > test-output.log 2>&1
   ```

2. **Analyze Output for Errors**:

   ```bash
   cat test-output.log | grep -i 'error\|fail\|exception'
   ```

3. **Parse Structured Test Results (if available)**:

   ```bash
   cat cypress/results/results.json
   ```
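The grep filter in step 2 can also be applied programmatically when the matching lines need further processing. A minimal Python sketch of the same case-insensitive filter (the sample log lines are invented):

```python
import re

# Same terms as the grep command above, case-insensitive.
ERROR_PATTERN = re.compile(r"error|fail|exception", re.IGNORECASE)


def error_lines(log_text):
    """Return the lines of a test log that mention errors, failures,
    or exceptions."""
    return [line for line in log_text.splitlines() if ERROR_PATTERN.search(line)]


# Fabricated excerpt of a Cypress-style run log.
sample_log = "\n".join([
    "  1 passing (2s)",
    "  1 failing",
    "AssertionError: Timed out retrying after 4000ms",
])
print(error_lines(sample_log))
```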
### Common Local Test Errors and Solutions

#### WordPress Playground Port Issues

**Error:** `The current host is 127.0.0.1:8888, but WordPress multisites do not support custom ports.`

**Solution:** Modify the port in the blueprint or test configuration:

```json
{
  "features": {
    "networking": true
  }
}
```

#### Cypress Selector Errors

**Error:** `Timed out retrying after 4000ms: expected '<body...>' to have class 'wp-admin'`

**Solution:** Update selectors to be more robust and handle login states:

```javascript
cy.get('body').then(($body) => {
  if ($body.hasClass('login')) {
    cy.get('#user_login').type('admin');
    cy.get('#user_pass').type('password');
    cy.get('#wp-submit').click();
  }
});

// Check for the admin bar instead of the body class
cy.get('#wpadminbar').should('exist');
```
## Code Quality Tool Integration

### Automated Code Quality Checks

AI assistants can integrate with various code quality tools to identify and fix issues.

#### PHPCS (PHP CodeSniffer)

```bash
composer run phpcs
```

#### ESLint (JavaScript)

```bash
npm run lint:js
```

#### Stylelint (CSS)

```bash
npm run lint:css
```

### Parsing Code Quality Tool Output

1. **Run a Code Quality Check**:

   ```bash
   composer run phpcs > phpcs-output.log 2>&1
   ```

2. **Analyze Output for Errors**:

   ```bash
   cat phpcs-output.log | grep -i 'ERROR\|WARNING'
   ```

3. **Automatically Fix Issues (when possible)**:

   ```bash
   composer run phpcbf
   ```
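When PHPCS is run with its JSON reporter (`--report=json`), the output can be parsed instead of grepped, which makes it easier to group issues per file. A sketch assuming the standard PHPCS JSON report shape (the sample report below is fabricated for illustration):

```python
import json


def summarize_phpcs(report_json):
    """Group PHPCS messages by file from a `phpcs --report=json` report
    (shape assumed from the PHPCS documentation)."""
    report = json.loads(report_json)
    issues = {}
    for path, data in report.get("files", {}).items():
        for msg in data.get("messages", []):
            issues.setdefault(path, []).append(
                f'{msg["type"]} line {msg["line"]}: {msg["message"]}'
            )
    return issues


# Fabricated report with a single error, mirroring the docblock example below.
sample = json.dumps({
    "totals": {"errors": 1, "warnings": 0},
    "files": {
        "includes/plugin.php": {
            "errors": 1, "warnings": 0,
            "messages": [{"type": "ERROR", "line": 12,
                          "message": "Missing doc comment for function"}],
        }
    },
})
print(summarize_phpcs(sample))
```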
### Monitoring Code Quality Feedback in Pull Requests

Automated code quality tools often provide feedback directly in pull requests.
AI assistants can check these comments to identify and address issues.

#### Accessing PR Comments via GitHub API

```text
github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/pulls/{pull_number}/comments
```

#### Accessing PR Review Comments

```text
github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/pulls/{pull_number}/reviews
```

### Checking CodeRabbit Feedback

CodeRabbit provides AI-powered code review comments via the GitHub API.

1. **Get PR Comments**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/pulls/{pull_number}/comments
   ```

2. **Filter for CodeRabbit Comments**: Look for comments from the `coderabbitai` user.

3. **Parse Actionable Feedback**:
   - Code quality issues
   - Suggested improvements
   - Best practice recommendations
   - Specific code snippets to fix
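Filtering the comment list for CodeRabbit's entries is a small transformation over the standard comment objects. A sketch (assuming the usual `user.login` field on each comment; the sample comments are invented):

```python
def coderabbit_comments(pr_comments):
    """Filter a PR comment list down to CodeRabbit's comments,
    matching bot logins such as 'coderabbitai[bot]'."""
    return [
        c["body"] for c in pr_comments
        if c.get("user", {}).get("login", "").startswith("coderabbitai")
    ]


# Fabricated comment objects shaped like the GitHub API response.
sample = [
    {"user": {"login": "coderabbitai[bot]"}, "body": "Add concurrency control."},
    {"user": {"login": "octocat"}, "body": "LGTM"},
]
print(coderabbit_comments(sample))  # → ['Add concurrency control.']
```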
### Checking Codacy and CodeFactor Feedback

These tools provide automated code quality checks and post results as PR comments.

1. **Check PR Status Checks**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/commits/{sha}/check-runs
   ```

2. **Get Detailed Reports (if available via API)**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/commits/{sha}/check-runs/{id}
   ```

3. **Parse Common Issues**:
   - Code style violations
   - Potential bugs
   - Security vulnerabilities
   - Performance issues
   - Duplication

### Checking SonarCloud Analysis

SonarCloud provides detailed code quality and security analysis.

1. **Check SonarCloud Status**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/commits/{sha}/check-runs?check_name=SonarCloud
   ```

2. **Parse SonarCloud Issues**:
   - Code smells
   - Bugs
   - Vulnerabilities
   - Security hotspots
   - Coverage gaps
### Common Code Quality Issues and Solutions

#### WordPress Coding Standards

**Error:** `ERROR: Expected snake_case for function name, but found camelCase`

**Solution:** Rename functions to follow the snake_case convention:

```php
// Before
function getPluginVersion() { ... }

// After
function get_plugin_version() { ... }
```

#### Missing Docblocks

**Error:** `ERROR: Missing doc comment for function`

**Solution:** Add proper docblocks:

```php
/**
 * Get the plugin version.
 *
 * @since 1.0.0
 * @return string The plugin version.
 */
function get_plugin_version() { ... }
```
## Automated Error Resolution

### Error Resolution Workflow

1. **Identify Error**: Use the GitHub API or local test output to identify errors
2. **Categorize Error**: Determine the error type and severity
3. **Search for Solution**: Look for patterns in known solutions
4. **Apply Fix**: Make the necessary code changes
5. **Verify Fix**: Run tests again to confirm the issue is resolved
6. **Document Solution**: Update documentation with the solution
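The identify/fix/verify cycle above can be expressed as a retry loop that escalates when fixes stop working. This is a hypothetical sketch: `run_tests` and `apply_fix` are placeholder callables supplied by the caller, not functions from this project.

```python
def resolve_with_retries(run_tests, apply_fix, max_attempts=3):
    """Sketch of the identify -> fix -> verify loop.

    `run_tests` returns an error string, or None on success.
    `apply_fix` receives that error string and attempts a fix.
    """
    for attempt in range(1, max_attempts + 1):
        error = run_tests()          # identify
        if error is None:
            return f"resolved after {attempt} run(s)"
        apply_fix(error)             # apply fix, then verify on next pass
    return "escalate to human review"


# Stub harness: the "fix" clears the error, so the second run succeeds.
state = {"error": "Timed out retrying after 4000ms"}
result = resolve_with_retries(
    run_tests=lambda: state["error"],
    apply_fix=lambda err: state.update(error=None),
)
print(result)  # → resolved after 2 run(s)
```

In practice `run_tests` would wrap a command like `npm run test:single` and `apply_fix` would be the AI assistant's edit step.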
### Processing Code Quality Feedback

#### Extracting Actionable Items from PR Comments

1. **Collect All Feedback**:

   ```text
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/pulls/{number}/comments
   github-api /repos/wpallstars/wp-plugin-starter-template-for-ai-coding/pulls/{number}/reviews
   ```

2. **Categorize Issues**:
   - **Critical**: Security vulnerabilities, breaking bugs
   - **High**: Code quality violations, potential bugs
   - **Medium**: Style issues, best practices
   - **Low**: Documentation, minor improvements

3. **Prioritize Fixes**:
   - Address critical issues first
   - Group related issues for efficient fixing
   - Consider dependencies between issues

4. **Create a Fix Plan**:
   - Document the files that need changes
   - Outline the specific changes needed
   - Note any potential side effects

#### Responding to Code Quality Tool Comments

1. **Acknowledge Feedback**: React to or reply to comments
2. **Implement Fixes**: Make the necessary code changes
3. **Explain Changes (if needed)**: Add comments explaining decisions
4. **Request Review (if needed)**: Ask for a re-review after fixes
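The severity buckets in the categorization step lend themselves to a simple triage helper. A minimal sketch (the issue tuples are invented examples):

```python
# Severity buckets from the categorization step, lowest value = fix first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}


def prioritize(issues):
    """Sort (severity, description) tuples so critical issues come first.

    sorted() is stable, so issues within a bucket keep their original order.
    """
    return sorted(issues, key=lambda issue: SEVERITY_ORDER[issue[0]])


backlog = [
    ("low", "Update inline docs"),
    ("critical", "SQL injection in search handler"),
    ("medium", "Inconsistent naming style"),
]
print(prioritize(backlog)[0])  # → ('critical', 'SQL injection in search handler')
```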
## Feedback Loop Architecture

### Complete Feedback Loop System

```text
Code Changes ──► Local Testing ──► GitHub Actions
      │                │                 │
      ▼                ▼                 ▼
AI Assistant ◀── Error Analysis ◀── Status Check
      │
      ▼
Fix Generation ──► Human Review (only when necessary)
```

### Key Components

- **Code Changes**: Initial code modifications
- **Local Testing**: Immediate feedback in the local environment
- **GitHub Actions**: Remote CI/CD pipeline validation
- **Status Check**: Monitoring workflow status via the GitHub API
- **Error Analysis**: Parsing and categorizing errors
- **AI Assistant**: Central intelligence for error resolution
- **Fix Generation**: Creating and implementing solutions
- **Human Review**: Optional step for complex decisions
## Handling Direct Feedback from Code Review Tools

### Accessing and Processing CodeRabbit Feedback

CodeRabbit provides detailed AI-powered code reviews that can be directly accessed and processed.

#### Example CodeRabbit Feedback

```text
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
.github/workflows/playground-tests-fix.yml (3)
9-13: Add concurrency control to avoid redundant runs.
```

#### Processing Steps

1. **Extract Specific Recommendations**:
   - Identify file paths and line numbers
   - Parse suggested code changes
   - Understand the rationale for the changes

2. **Implement Recommendations**: Apply the suggested changes

3. **Verify Implementation**:
   - Run local tests if applicable
   - Commit changes with a descriptive message
   - Monitor the CI/CD pipeline for success
### Handling SonarCloud and Codacy Feedback

These tools provide structured feedback that can be systematically addressed.

#### Example SonarCloud Feedback

```text
SonarCloud Quality Gate failed

- 3 Bugs
- 5 Code Smells
- 1 Security Hotspot
```

#### SonarCloud Processing Steps

1. **Access Detailed Reports**:
   - Use the SonarCloud API or web interface
   - Categorize issues by severity and type

2. **Address Issues Systematically**:
   - Fix bugs first
   - Address security hotspots
   - Resolve code smells

3. **Document Resolutions**:
   - Note patterns of issues for future prevention
   - Update coding guidelines if necessary
## When to Consult Humans

While the goal is to create an autonomous system, there are scenarios where human input is necessary.

### Scenarios Requiring Human Consultation

- **Product Design Decisions**: Features, UX, and strategic direction
- **Security-Critical Changes**: Changes that could impact the security posture
- **Architectural Decisions**: Major structural changes to the codebase
- **Deployment Approvals**: Final approval for production releases
- **Access Requirements**: When additional permissions are needed
- **Ambiguous Errors**: When errors have multiple possible interpretations
- **Novel Problems**: Issues without precedent or documented solutions
- **External Service Issues**: Problems with third-party services

### Effective Human Consultation

When consulting humans, provide:

- **Clear Context**: Explain what you were trying to accomplish
- **Error Details**: Provide specific error messages and logs
- **Attempted Solutions**: Document what you've already tried
- **Specific Questions**: Ask targeted questions rather than open-ended ones
- **Recommendations**: Suggest possible solutions for approval
## Contributing to External Repositories

When issues are caused by bugs or missing features in external dependencies or GitHub Actions, AI assistants can contribute fixes upstream.

### Workflow for External Contributions

1. **Clone the Repository Locally**:

   ```bash
   cd ~/Git
   git clone https://github.com/owner/repo.git
   cd repo
   git checkout -b feature/descriptive-branch-name
   ```

2. **Make Changes and Commit**:

   ```bash
   # Make your changes
   git add -A
   git commit -m "Descriptive commit message

   Detailed explanation of what the change does and why.

   Fixes #issue-number"
   ```

3. **Fork and Push**:

   ```bash
   # Create a fork (if not already forked)
   gh repo fork owner/repo --clone=false --remote=true

   # Add the fork as a remote
   git remote add fork https://github.com/your-username/repo.git

   # Push to the fork
   git push fork feature/descriptive-branch-name
   ```

4. **Create a Pull Request**:

   ```bash
   gh pr create \
     --repo owner/repo \
     --head your-username:feature/descriptive-branch-name \
     --title "Clear, descriptive title" \
     --body "## Summary

   Description of changes...

   Fixes #issue-number"
   ```

### Best Practices for External Contributions

- Always clone to `~/Git/` for consistency
- Check existing issues and PRs before starting work
- Follow the project's contribution guidelines
- Keep changes focused and minimal
- Include tests if the project has a test suite
- Reference the issue number in commits and the PR description

### Local Repository Management

Keep cloned repositories in `~/Git/` organized:

- `~/Git/wp-plugin-starter-template-for-ai-coding/` - Main project
- `~/Git/wp-performance-action/` - Forked for contributions
- Other cloned repos as needed
## Conclusion

This error checking and feedback loop system creates a comprehensive framework for AI-driven development.
By systematically monitoring, analyzing, and resolving errors, the AI assistant can maintain high code quality.

For related workflows, refer to the other documents in the `.agents/` directory.