Commit e096f62c authored by Jan Reimes's avatar Jan Reimes

style(skill): standardize formatting and improve readability

* Update SKILL.md files for subagent-driven-development and systematic-debugging to use consistent formatting.
* Replace header style with a unified format for better clarity.
* Enhance documentation with additional spacing and improved structure.
* Ensure all sections are clearly defined and easy to navigate.
* Apply similar formatting changes across various debugging and development skills for consistency.
parent 2f192085
+14 −8
----
-name: code-auditor
-description: Performs comprehensive codebase analysis covering architecture, code quality, security, performance, testing, and maintainability. Use when user wants to audit code quality, identify technical debt, find security issues, assess test coverage, or get a codebase health check.
----
+______________________________________________________________________
+
+## name: code-auditor description: Performs comprehensive codebase analysis covering architecture, code quality, security, performance, testing, and maintainability. Use when user wants to audit code quality, identify technical debt, find security issues, assess test coverage, or get a codebase health check.

# Code Auditor

@@ -20,6 +19,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
## What It Analyzes

### 1. Architecture & Design

- Overall structure and organization
- Design patterns in use
- Module boundaries and separation of concerns
@@ -27,6 +27,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
- Architectural decisions and trade-offs

### 2. Code Quality

- Complexity hotspots (cyclomatic complexity)
- Code duplication (DRY violations)
- Naming conventions and consistency
@@ -34,6 +35,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
- Code smells and anti-patterns

### 3. Security

- Common vulnerabilities (OWASP Top 10)
- Input validation and sanitization
- Authentication and authorization
@@ -41,6 +43,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
- Dependency vulnerabilities

### 4. Performance

- Algorithmic complexity issues
- Database query optimization
- Memory usage patterns
@@ -48,6 +51,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
- Resource leaks

### 5. Testing

- Test coverage assessment
- Test quality and effectiveness
- Missing test scenarios
@@ -55,6 +59,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
- Integration vs unit test balance

### 6. Maintainability

- Technical debt assessment
- Coupling and cohesion
- Ease of future changes
@@ -64,10 +69,10 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
## Approach

1. **Explore** using Explore agent (thorough mode)
-2. **Identify patterns** with Grep and Glob
-3. **Read critical files** for detailed analysis
-4. **Run static analysis tools** if available
-5. **Synthesize findings** into actionable report
+1. **Identify patterns** with Grep and Glob
+1. **Read critical files** for detailed analysis
+1. **Run static analysis tools** if available
+1. **Synthesize findings** into actionable report

## Thoroughness Levels

@@ -136,6 +141,7 @@ Comprehensive codebase analysis covering architecture, code quality, security, p
## Configuration

Can focus on specific areas:

- Security-only audit
- Performance-only audit
- Testing-only assessment
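
As a concrete illustration of the "complexity hotspots" check listed above, a rough cyclomatic-complexity estimate can be computed with nothing but the standard library. This is a minimal sketch under the usual definition (1 + branch points per function); the auditor's actual tooling is not specified here:

```python
import ast

def cyclomatic_complexity(source: str) -> dict:
    """Rough per-function cyclomatic complexity: 1 + number of branch points."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(
                isinstance(n, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp))
                for n in ast.walk(node)
            )
            scores[node.name] = 1 + branches
    return scores

src = "def f(x):\n    if x > 0 and x < 10:\n        return 1\n    return 0\n"
print(cyclomatic_complexity(src))  # {'f': 3} -- the `if` and the `and` each add one
```

Flagging functions whose score exceeds a threshold (e.g. 10–15) is the standard way to turn this number into an audit finding.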
+8 −6
----
-name: code-execution
-description: Execute Python code locally with marketplace API access for 90%+ token savings on bulk operations. Activates when user requests bulk operations (10+ files), complex multi-step workflows, iterative processing, or mentions efficiency/performance.
----
+______________________________________________________________________
+
+## name: code-execution description: Execute Python code locally with marketplace API access for 90%+ token savings on bulk operations. Activates when user requests bulk operations (10+ files), complex multi-step workflows, iterative processing, or mentions efficiency/performance.

# Code Execution

@@ -48,12 +47,13 @@ git.git_commit('feat: refactor code')
## Pattern

1. **Analyze locally** (metadata only, not source)
-2. **Process locally** (all operations in execution)
-3. **Return summary** (not data!)
+1. **Process locally** (all operations in execution)
+1. **Return summary** (not data!)
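
The three steps of this pattern can be sketched in plain Python. This is a hypothetical illustration using only the standard library; the actual `execution_runtime` helpers are not shown:

```python
import tempfile
from pathlib import Path

# Set up a throwaway project directory for the demonstration
root = Path(tempfile.mkdtemp())
(root / "small.py").write_text("x = 1\n")
(root / "big.py").write_text("y = 2\n" * 10_000)

# 1. Analyze locally: collect metadata only, never full source text
files = list(root.glob("**/*.py"))
sizes = {f.name: f.stat().st_size for f in files}

# 2. Process locally: all per-file work happens inside the execution
large = [name for name, size in sizes.items() if size > 50_000]

# 3. Return summary (not data!): only counts leave the execution
result = {"files_scanned": len(files), "large_files": len(large)}
print(result)  # {'files_scanned': 2, 'large_files': 1}
```

The token savings come entirely from step 3: the model sees the two-key summary, not 10,000 lines of source.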

## Examples

**Bulk refactor (50 files):**

```python
from execution_runtime import transform
result = transform.rename_identifier('.', 'oldName', 'newName', '**/*.py')
@@ -61,6 +61,7 @@ result = transform.rename_identifier('.', 'oldName', 'newName', '**/*.py')
```

**Extract functions:**

```python
from execution_runtime import code, fs

@@ -73,6 +74,7 @@ result = {'functions_moved': len(functions)}
```

**Code audit (100 files):**

```python
from execution_runtime import code
from pathlib import Path
+5 −5
@@ -9,11 +9,11 @@ from api.code_transform import rename_identifier

# Rename function across all Python files
result = rename_identifier(
-    pattern='.',  # Current directory
-    old_name='getUserData',
-    new_name='fetchUserData',
-    file_pattern='**/*.py',  # All Python files recursively
-    regex=False  # Exact identifier match
+    pattern=".",  # Current directory
+    old_name="getUserData",
+    new_name="fetchUserData",
+    file_pattern="**/*.py",  # All Python files recursively
+    regex=False,  # Exact identifier match
)

# Result contains summary only (not all file contents!)
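
The `regex=False` "exact identifier match" above behaves roughly like a word-boundary substitution. A simplified stand-in (not the real `rename_identifier` internals) looks like:

```python
import re

def rename_exact(source: str, old: str, new: str) -> str:
    # \b keeps getUserData from also matching getUserDataCache
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

line = "x = getUserData(); y = getUserDataCache()"
print(rename_exact(line, "getUserData", "fetchUserData"))
# → x = fetchUserData(); y = getUserDataCache()
```

A plain string replace would silently corrupt `getUserDataCache`; the word boundary is what makes the match "exact".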
+15 −38
@@ -4,19 +4,15 @@ Example: Comprehensive Codebase Audit
Analyze code quality across entire project with minimal tokens.
"""

-from api.code_analysis import analyze_dependencies, find_unused_imports
from pathlib import Path

+from api.code_analysis import analyze_dependencies, find_unused_imports

# Find all Python files
-files = list(Path('.').glob('**/*.py'))
+files = list(Path(".").glob("**/*.py"))
print(f"Analyzing {len(files)} files...")

-issues = {
-    'high_complexity': [],
-    'unused_imports': [],
-    'large_files': [],
-    'no_docstrings': []
-}
+issues = {"high_complexity": [], "unused_imports": [], "large_files": [], "no_docstrings": []}

# Analyze each file (metadata only, not source!)
for file in files:
@@ -26,45 +22,26 @@ for file in files:
    deps = analyze_dependencies(file_str)

    # Flag high complexity
-    if deps.get('complexity', 0) > 15:
-        issues['high_complexity'].append({
-            'file': file_str,
-            'complexity': deps['complexity'],
-            'functions': deps['functions'],
-            'avg_complexity': deps.get('avg_complexity_per_function', 0)
-        })
+    if deps.get("complexity", 0) > 15:
+        issues["high_complexity"].append(
+            {"file": file_str, "complexity": deps["complexity"], "functions": deps["functions"], "avg_complexity": deps.get("avg_complexity_per_function", 0)}
+        )

    # Flag large files
-    if deps.get('lines', 0) > 500:
-        issues['large_files'].append({
-            'file': file_str,
-            'lines': deps['lines'],
-            'functions': deps['functions']
-        })
+    if deps.get("lines", 0) > 500:
+        issues["large_files"].append({"file": file_str, "lines": deps["lines"], "functions": deps["functions"]})

    # Find unused imports
    unused = find_unused_imports(file_str)
    if unused:
-        issues['unused_imports'].append({
-            'file': file_str,
-            'count': len(unused),
-            'imports': unused
-        })
+        issues["unused_imports"].append({"file": file_str, "count": len(unused), "imports": unused})

# Return summary (NOT all the data!)
result = {
-    'files_audited': len(files),
-    'total_lines': sum(d.get('lines', 0) for d in [analyze_dependencies(str(f)) for f in files]),
-    'issues': {
-        'high_complexity': len(issues['high_complexity']),
-        'unused_imports': len(issues['unused_imports']),
-        'large_files': len(issues['large_files'])
-    },
-    'top_complexity_issues': sorted(
-        issues['high_complexity'],
-        key=lambda x: x['complexity'],
-        reverse=True
-    )[:5]  # Only top 5
+    "files_audited": len(files),
+    "total_lines": sum(d.get("lines", 0) for d in [analyze_dependencies(str(f)) for f in files]),
+    "issues": {"high_complexity": len(issues["high_complexity"]), "unused_imports": len(issues["unused_imports"]), "large_files": len(issues["large_files"])},
+    "top_complexity_issues": sorted(issues["high_complexity"], key=lambda x: x["complexity"], reverse=True)[:5],  # Only top 5
}

print(f"\nAudit complete:")
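
Note that the `no_docstrings` bucket is initialized in this script but never filled in the hunks shown. A check of that kind could be sketched with the standard library (a hypothetical helper, not part of `api.code_analysis`):

```python
import ast

def missing_docstrings(source: str) -> list:
    """Names of functions and classes that lack a docstring."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

src = 'def documented():\n    """Has a docstring."""\n\ndef bare():\n    pass\n'
print(missing_docstrings(src))  # ['bare']
```

As with the other checks, only the list of offending names would go into the summary, not the file contents.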
+7 −11
@@ -9,28 +9,24 @@ from api.code_analysis import find_functions
from api.filesystem import copy_lines, paste_code, read_file, write_file

# Find utility functions (returns metadata ONLY, not source code)
-functions = find_functions('app.py', pattern='.*_util$', regex=True)
+functions = find_functions("app.py", pattern=".*_util$", regex=True)

print(f"Found {len(functions)} utility functions")

# Extract imports from original file
-content = read_file('app.py')
-imports = [line for line in content.splitlines()
-           if line.strip().startswith(('import ', 'from '))]
+content = read_file("app.py")
+imports = [line for line in content.splitlines() if line.strip().startswith(("import ", "from "))]

# Create new utils.py with imports
-write_file('utils.py', '\n'.join(set(imports)) + '\n\n')
+write_file("utils.py", "\n".join(set(imports)) + "\n\n")

# Copy each function to utils.py
for func in functions:
    print(f"  Moving {func['name']} (lines {func['start_line']}-{func['end_line']})")
-    code = copy_lines('app.py', func['start_line'], func['end_line'])
-    paste_code('utils.py', -1, code + '\n\n')  # -1 = append to end
+    code = copy_lines("app.py", func["start_line"], func["end_line"])
+    paste_code("utils.py", -1, code + "\n\n")  # -1 = append to end

-result = {
-    'functions_extracted': len(functions),
-    'function_names': [f['name'] for f in functions]
-}
+result = {"functions_extracted": len(functions), "function_names": [f["name"] for f in functions]}

# Token usage: ~800 tokens
# vs ~15,000 tokens reading full file into context
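
The `copy_lines`/`paste_code` calls above can be approximated with stdlib file slicing. These are hypothetical stand-ins for the `api.filesystem` helpers, shown only to make the 1-based, inclusive line-range semantics concrete:

```python
import os
import tempfile

def copy_lines(path: str, start: int, end: int) -> str:
    """Return lines start..end of a file (1-based, inclusive)."""
    with open(path) as f:
        return "".join(f.readlines()[start - 1:end])

def paste_code(path: str, position: int, text: str) -> None:
    """Append to the file when position == -1, mirroring the usage above."""
    with open(path, "a" if position == -1 else "w") as f:
        f.write(text)

# Demonstrate on a throwaway file
fd, src = tempfile.mkstemp(suffix=".py")
os.close(fd)
with open(src, "w") as f:
    f.write("a = 1\nb = 2\nc = 3\n")

snippet = copy_lines(src, 2, 3)
print(repr(snippet))  # 'b = 2\nc = 3\n'
```

Because only line ranges and function metadata cross into context, the full source of `app.py` never has to be read by the model.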