I build modern web applications and custom digital tools that help businesses grow through technological innovation. My passion is combining computer science and economics to generate real value.
My passion for computing was born in the classrooms of the Istituto Tecnico Commerciale di Maglie, where I discovered the power of programming and the appeal of building digital solutions. From the very beginning, I understood that computer science was not just code, but an extraordinary tool for turning ideas into reality.
During my studies in Business Information Systems, I began weaving together computer science and economics, understanding how technology can drive growth for any business. That vision accompanied me to the Università degli Studi di Bari, where I earned my degree in Computer Science, deepening my technical skills and my passion for software development.
Today I put this experience at the service of companies, professionals and startups, building tailor-made digital solutions that automate processes, optimize resources and open up new business opportunities. Because true innovation begins when technology meets people's real needs.
My Skills
Data Analysis & Forecasting Models
I turn data into strategic insights through in-depth analysis and predictive models that support informed decisions
Process Automation
I build custom tools that automate repetitive operations and free up time for value-added activities
Custom Systems
I develop tailor-made software systems, from platform integrations to custom dashboards
I firmly believe that computer science is the most powerful tool for turning ideas into reality and improving people's lives.
🚀
Democratizing Technology
My mission is to make computing accessible to everyone: from small local businesses to innovative startups, to professionals who want to digitize their work. Every organization deserves to harness the potential of digital.
💡
Combining Computer Science and Economics
It is not just about writing code: it is about understanding how technology can generate real value. By weaving together technical skills and an economic perspective, I help businesses grow, streamline processes and reach new levels of efficiency and profitability.
🎯
Building Tailor-Made Solutions
Every business is unique, and its solutions should be too. I develop custom tools that meet each client's specific needs, automating repetitive processes and freeing up time for what really matters: growing the business.
Transform Your Business with Technology
Whether you run a shop, a professional practice or a company, I can help you harness the potential of computing to work better, faster and smarter.
My academic path and the technologies I master
Professional Certifications
8 certifications earned
Reinvention With Agentic AI Learning Program
Anthropic
December 2024
Agentic AI Fluency
Anthropic
December 2024
AI Fluency for Students
Anthropic
December 2024
AI Fluency: Framework and Foundations
Anthropic
December 2024
Claude with the Anthropic API
Anthropic
December 2024
Master SQL
RoadMap.sh
November 2024
Oracle Certified Foundations Associate
Oracle
October 2024
People Leadership Credential
Connect
September 2024
💻 Languages & Technologies
☕Java
🐍Python
📜JavaScript
🅰️Angular
⚛️React
🔷TypeScript
🗄️SQL
🐘PHP
🎨CSS/SCSS
🔧Node.js
🐳Docker
🌿Git
💼
12/2024 - Present
Custom Software Engineering Analyst
Accenture
Bari, Puglia, Italy · Hybrid
Analysis and development of IT systems using Java and Quarkus in the Health and Public Sector. Ongoing training on modern technologies for building custom, efficient software solutions, and on AI agents.
💼
06/2022 - 12/2024
Software Analyst and Back-End Developer, Associate Consultant
Links Management and Technology SpA
Experience analyzing as-is software systems and ETL flows using PowerCenter. Completed training on Spring Boot for developing modern, scalable backend applications. Backend developer specializing in Spring Boot, with experience in database design and in the analysis, development and testing of assigned tasks.
💼
02/2021 - 10/2021
Software Developer
Adesso.it (formerly WebScience srl)
Experience with AS-IS and TO-BE analysis, SEO improvements and website enhancements to boost performance and user engagement.
🎓
2018 - 2025
Degree in Computer Science
Università degli Studi di Bari Aldo Moro
Bachelor's degree in Computer Science, focusing on software engineering, algorithms, and modern development practices.
📚
2013 - 2018
Diploma - Business Information Systems
Istituto Tecnico Commerciale di Maglie
Technical diploma specializing in Business Information Systems, combining IT knowledge with business management.
Contact Me
Got a project in mind? Let's talk! Fill out the form below and I will get back to you as soon as possible.
* Required fields. Your data will be used only to respond to your request.
06 - Prompt Engineering for AI IDEs: Writing Prompts That Actually Work
There is a clear line between a developer who uses an AI coding assistant and one
who knows how to communicate with it. The first gets mediocre results, needs many
iterations and ends up manually correcting generated code. The second gets precise output on
the first or second try, accelerates their workflow and builds persistent instruction systems
that improve over time. The difference lies entirely in prompt engineering.
In the vibe coding paradigm, the prompt is not just an input: it is the primary communication
interface between developer and AI. Prompt quality determines code quality, suggestion
relevance, debugging depth and refactoring soundness. Yet most developers use improvised,
vague and unstructured prompts, and then complain about the results.
This article explores prompt engineering as a technical discipline applied to software
development: from the anatomy of an effective prompt to advanced patterns like chain-of-thought
and meta-prompting, from configuring system files (CLAUDE.md,
.cursorrules, .github/copilot-instructions.md) to prompt chaining
for complex tasks. With concrete TypeScript and Python examples throughout.
What You Will Learn
Anatomy of an effective prompt: Context, Constraints, Expected Output
How to configure CLAUDE.md, .cursorrules and copilot-instructions.md
Tool-specific prompts: Claude Code, Cursor and GitHub Copilot compared
Effective prompts for refactoring, debugging and testing with real examples
Prompt chaining to decompose complex tasks into manageable steps
The most common anti-patterns and how to avoid them
Metrics for measuring the quality of your prompts
The Anatomy of an Effective Prompt
An effective prompt for an AI coding assistant is not a simple sentence in natural language.
It is an information structure that gives the AI exactly what it needs to produce quality output.
Every effective prompt contains three fundamental components: context,
constraints and expected output.
1. Context: What the AI Knows About Your Problem
Context is the most critical and most often neglected component. The AI does not know your project,
your architecture, your coding standards or your business requirements. Without context, it produces
generic code that might work abstractly but not in your specific case. Effective context includes:
Technology stack: language, framework, specific versions
Current architecture: patterns used, project structure
Relevant code: functions, classes, interfaces related to the problem
Business objective: why you are making this change
Current state: what works, what does not, what exactly the problem is
2. Constraints: The Boundaries of Acceptable Solutions
Constraints define the limits of acceptable solutions. Without constraints, the AI may propose
technically correct solutions that are incompatible with your context: different libraries from
those approved, architectures that break backward compatibility, patterns misaligned with team
standards. Typical constraints include:
Approved or prohibited libraries
Compatibility with specific versions of Node, TypeScript, Python, etc.
Team standards (naming conventions, minimum test coverage)
Security constraints (no eval, no shell injection, input validation)
3. Expected Output: The Shape of the Result
Specifying the output format is essential. A prompt saying "improve this function" can get a
text response, inline code, a list of suggestions or a complete refactoring. Specifying what
you want reduces ambiguity:
Format: code directly, explanation + code, list of changes
Scope: just the function, the entire module, with tests included
Detail level: with comments, with TypeScript types, with error handling
Next action: apply the change, show a diff, explain first then implement
Compare a vague prompt versus a well-structured one for the same task:
Comparison: vague prompt vs structured prompt
# VAGUE PROMPT - Unpredictable result
"Improve this authentication function"
# STRUCTURED PROMPT - Predictable result
"""
Context:
- Stack: TypeScript 5.4, Node.js 22, Express 5, JWT
- Current authentication function (see below)
- Architecture: Repository Pattern, services separate from controllers
Problem:
The validateToken function throws untyped exceptions and does not handle
expired refresh tokens. In prod we had 3 crashes in the last 24h.
Constraints:
- Do not change the public interface (backward compatibility)
- Use only libraries already in package.json (jsonwebtoken, zod)
- Maintain compatibility with Node.js 20+
- Minimum test coverage 80%
Expected output:
1. validateToken function refactored with typed error handling
2. Custom types for errors (AuthError with specific codes)
3. Jest unit tests for the main error cases
Current code:
async function validateToken(token: string) {
  const decoded = jwt.verify(token, process.env.JWT_SECRET);
  return decoded;
}
"""
The Explicit Context Rule
Research on prompt engineering shows that providing context at the beginning and end of the
prompt is significantly more effective than placing it in the middle. The model tends to give
more weight to information in the opening and closing positions. For long prompts, repeat the
most critical constraint at the end with a phrase like "Remember: do not change the public
interface of the function."
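Applied to a real prompt, the rule produces a "sandwich" structure like the one sketched below (the function name parseInvoice is purely illustrative):
The context sandwich: critical constraint at the start and at the end
# The critical constraint opens and closes the prompt
"""
Constraint: do NOT change the public interface of parseInvoice().

Context: TypeScript 5.4, Node.js 22, invoicing module.
[relevant code and detailed requirements here]

Remember: the public interface of parseInvoice() must not change.
"""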
Core Prompt Engineering Patterns
Prompt engineering patterns emerge from academic research on LLM interaction but apply
concretely to daily work with AI coding assistants. Knowing them lets you choose the right
tool for each type of problem.
Zero-Shot Prompting
The simplest pattern: ask the AI to perform a task without providing examples. Works well for
standard, well-defined tasks where the model has abundant training data. It is the starting
point for any interaction.
Zero-shot for TypeScript code generation
# Zero-shot: standard task with minimal context
"""
Create a React custom hook in TypeScript to manage HTTP requests with:
- Typed loading, data, error state using generics
- Automatic cancellation with AbortController on unmount
- Automatic retry with exponential backoff (max 3 attempts)
- In-memory cache to avoid duplicate requests within 30 seconds
Use React 18+ and TypeScript strict mode.
"""
Few-Shot Prompting
Provide examples of the pattern you want to replicate. Particularly effective for establishing
project-specific naming conventions, data structures or non-standard code styles. The model
"learns" the pattern from examples and applies it to the new case.
Few-shot to respect project naming conventions
# Few-shot: establish the project error handling pattern
"""
In our project we use this pattern for errors:
EXAMPLE 1 - Repository error:
class UserNotFoundError extends AppError {
  constructor(userId: string) {
    super(`User ${userId} not found`, 'USER_NOT_FOUND', 404);
  }
}
EXAMPLE 2 - Validation error:
class InvalidEmailError extends AppError {
  constructor(email: string) {
    super(`Invalid email format: ${email}`, 'INVALID_EMAIL', 422);
  }
}
EXAMPLE 3 - External service error:
class PaymentGatewayError extends AppError {
  constructor(provider: string, details: string) {
    super(`Payment failed via ${provider}: ${details}`, 'PAYMENT_FAILED', 502);
  }
}
Following exactly this pattern, create errors for the auth module:
TokenExpiredError, InvalidCredentialsError,
AccountLockedError and SessionLimitExceededError.
"""
Chain-of-Thought (CoT) Prompting
Ask the AI to reason step by step before providing the final code. This approach is particularly
effective for complex algorithmic problems, performance optimizations and hard-to-trace bugs.
Structured Chain-of-Thought (SCoT) research shows accuracy improvements of 15.27% for code
generation tasks compared to standard prompts.
Chain-of-thought for performance optimization
# Chain-of-thought: debugging a performance problem
"""
I have a Python function processing 50,000 records in ~40 seconds.
I need to get it under 2 seconds. Before writing code, reason
step by step:
1. Analyze the bottlenecks in the current code
2. Identify suboptimal data structures
3. Evaluate which optimizations have the biggest impact
4. Only then propose the optimized implementation
Current code:
def find_duplicate_transactions(transactions: list[dict]) -> list[dict]:
    duplicates = []
    for i, t1 in enumerate(transactions):
        for j, t2 in enumerate(transactions):
            if i != j:
                if (t1['amount'] == t2['amount'] and
                        t1['merchant'] == t2['merchant'] and
                        t1['date'] == t2['date']):
                    if t1 not in duplicates:
                        duplicates.append(t1)
    return duplicates
Constraints: Python 3.11+, no external libraries beyond stdlib
"""
Meta-Prompting
Meta-prompting uses the AI to generate or improve prompts themselves. It is an advanced technique
particularly useful when you do not know exactly how to frame a complex problem, or when you want
to optimize an existing prompt. The AI becomes a collaborator in prompt creation.
Meta-prompting: asking the AI to improve your prompt
# Meta-prompt: optimize a prompt for REST API generation
"""
This is the prompt I wrote to generate a REST API:
"Create an API to manage e-commerce orders in Node.js with TypeScript"
Analyze this prompt and:
1. Identify missing information that would make the result more precise
2. Suggest an improved version of the prompt with the necessary information
3. Explain why each addition improves the final result
Do not generate the API yet: let's optimize the prompt first.
"""
# The AI responds with an improved prompt like:
"""
[AI Suggestion]: The improved prompt should include:
- Specific framework (Express vs Fastify vs NestJS)
- Architectural pattern (MVC, Clean Architecture, DDD)
- Database (PostgreSQL, MongoDB) and ORM (Prisma, TypeORM)
- Authentication (JWT, OAuth2, API Key)
- Domain-specific order endpoints
- Validation and error handling requirements
- API standard (REST, JSON:API, HAL)
...
"""
Which Pattern to Choose?
Zero-shot: Standard, well-defined tasks with clear technical vocabulary
Few-shot: Project conventions, non-standard patterns, specific styles
Chain-of-thought: Complex algorithmic problems, optimizations, hard-to-trace bugs
Meta-prompting: When the problem is poorly defined, when you want to improve your prompts
System Prompts and Persistent Configuration Files
One of the most common mistakes in using AI coding assistants is having to repeat project
context in every conversation. Modern tools allow you to define persistent instructions that
are automatically included in every session. This is where you get the biggest long-term
productivity gains.
CLAUDE.md: Claude Code's Operating System
CLAUDE.md is the configuration file read by Claude Code at the start of every
session. According to an analysis of 328 public CLAUDE.md files, the most common categories
are: Architecture (72.6%), Development Guidelines (44.8%), Project Overview (39.0%),
Testing (35.4%) and Commands (33.2%). A good CLAUDE.md transforms Claude from a generic
assistant into an expert on your specific project.
CLAUDE.md - Effective structure for an Angular/Node.js project
# CLAUDE.md - Angular Portfolio + Node.js API
## Project Overview
Professional portfolio with blog, developer tools and API.
Frontend: Angular 21 + SSR. Backend: Node.js 22 + Express 5.
Deploy: Firebase Hosting (frontend) + Cloud Run (API).
## Tech Stack
- **Frontend**: Angular 21 standalone components, TypeScript 5.4, Signals
- **Backend**: Node.js 22, Express 5, TypeScript, Prisma ORM
- **Database**: PostgreSQL 16 (prod), SQLite (dev)
- **Testing**: Jest (unit), Cypress (e2e)
- **Linting**: ESLint 9 + Prettier
## Frontend Architecture
- Standalone components (no NgModules)
- Signals for reactive state (no RxJS where avoidable)
- OnPush change detection on all components
- Lazy loading for all routes
- CSS scoped to components (no global styles except design system)
## Backend Architecture
- Clean Architecture: routes -> controllers -> services -> repositories
- Repository Pattern for data access
- Centralized error handling with AppError and globalErrorHandler
- Input validation with Zod on all endpoints
- JWT + refresh token for authentication
## Essential Commands
```bash
# Frontend
npm start # dev server
npm run build # production build with SSR
npm test # unit tests
# Backend
npm run dev # dev with hot reload
npm run migrate # Prisma migrations
npm run seed # seed database
```
## Code Standards
- TypeScript strict mode: always
- No `any`: use explicit types or generics
- Functions: max 30 lines. Files: max 400 lines.
- Names: kebab-case for files, PascalCase for classes, camelCase for variables
- Imports: absolute paths with @ alias, no relative paths beyond 2 levels
## CRITICAL Security Rules
- NEVER hardcode secrets or API keys
- ALWAYS validate input with Zod before processing
- ALWAYS use parameterized queries (Prisma handles this)
- Rate limiting on all public endpoints
- CORS configured with explicit allowlist
## Testing
- Minimum coverage: 80% for services and repositories
- Test names: should + behavior (e.g.: "should return 404 when user not found")
- Mock: only external dependencies (DB, APIs), not internal logic
- E2E: critical user flows (auth, checkout, API key creation)
## Known Pitfalls
- SSR: use isPlatformBrowser() for browser-only APIs (localStorage, window)
- Prisma: use transactions for multi-step operations
- JWT: refresh token must be httpOnly cookie, not localStorage
## Cross-References
See @docs/architecture.md for diagrams
See @docs/api-patterns.md for REST conventions
See @.env.example for required environment variables
.cursorrules: Cursor's Configuration
Cursor uses a .cursorrules file at the project root (or the
.cursor/rules/ folder for more granular rules) to customize AI behavior.
Cursor supports three rule types: Always (applied to all files),
Auto-Apply (applied to files matching a glob pattern) and
Agent Select (applied selectively, when the agent deems them relevant or on explicit request). A sketch of a granular rule follows the example below.
.cursorrules - Configuration for TypeScript/Angular project
# .cursorrules - TypeScript Angular Project
## Role
You are a senior Angular developer. You know this project and follow
its conventions without exceptions. When generating code, always respect
these standards.
## TypeScript
- strict mode enabled: no implicit any, no unused variables
- Prefer `interface` over `type` for objects
- Use generics instead of `any` for reusable functions
- Destructuring where possible
- Async/await instead of Promise chains
## Angular Specific
- ONLY standalone components, never NgModules
- Signals for local state and computed values
- `inject()` instead of constructor injection
- OnPush change detection mandatory
- `ngOptimizedImage` for all static images
- Lazy loading on all feature routes
## CSS Style
- CSS custom properties (design system variables in styles.css)
- BEM naming for custom classes
- No inline styles in HTML templates
- Mobile-first with breakpoints: 768px (tablet), 1024px (desktop)
## Code NOT to Generate
- NgModules or forRoot/forChild patterns
- Class-based guards or resolvers (use functional guards)
- RxJS for simple state (use signals)
- setTimeout or setInterval (use RxJS timer if necessary)
- document.querySelector (use ViewChild or renderer)
## Output Format
When generating a component, always include:
1. The .ts file with the component
2. The .html file with the template
3. The .css file if there are specific styles
4. A .spec.ts file with minimal tests
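The .cursor/rules/ folder mentioned above holds one file per glob-scoped rule. A sketch of what such a rule can look like, assuming Cursor's MDC frontmatter format (the file name is hypothetical, and field names like description, globs and alwaysApply should be verified against your Cursor version):
.cursor/rules sketch: a rule scoped to Angular components
# .cursor/rules/angular-components.mdc (hypothetical file name)
---
description: Conventions for Angular standalone components
globs: src/app/**/*.component.ts
alwaysApply: false
---
- Standalone components only, never NgModules
- Signals for local state, OnPush change detection mandatory
- inject() instead of constructor injection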
.github/copilot-instructions.md: Custom Instructions for Copilot
GitHub Copilot uses the .github/copilot-instructions.md file for persistent
repository-level instructions. According to GitHub, this file is designed to "onboard" the
repository to the Copilot agent, providing the information needed to work efficiently from the
first interaction. In August 2025, Copilot introduced automatic generation of custom
instructions by analyzing repository structure.
.github/copilot-instructions.md - Example for Node.js team
# Copilot Custom Instructions
This is a TypeScript Node.js microservices project following Clean Architecture.
## Project Structure
- `src/domain/` - Business entities and domain logic (no framework dependencies)
- `src/application/` - Use cases and application services
- `src/infrastructure/` - Database, external APIs, repositories
- `src/presentation/` - HTTP controllers and routes
## Coding Standards
Always use TypeScript. Prefer `interface` over `type` for object shapes.
Use `Result<T, E>` pattern for error handling, never throw from business logic.
All public functions must have JSDoc with @param and @returns.
## Testing Requirements
Write Jest unit tests for all domain and application layer code.
Mock infrastructure dependencies. Target 85% coverage.
Test file naming: `*.spec.ts` co-located with source files.
## API Conventions
REST endpoints follow kebab-case: `/api/user-profiles` not `/api/userProfiles`.
All responses use the envelope: `{ data, error, meta }`.
HTTP status codes: 200 (success), 201 (created), 400 (validation),
401 (auth), 403 (authz), 404 (not found), 500 (server error).
## Security
Validate all input with Zod schemas at the presentation layer.
Never log sensitive data (passwords, tokens, PII).
Use parameterized queries via Prisma - never string concatenation.
## Dependencies
Approved: express, zod, prisma, jsonwebtoken, winston, jest
Restricted: lodash (use native JS), moment (use date-fns), axios (use fetch)
Forbidden: eval, vm, child_process (unless in dedicated worker)
Task-Specific Prompts: Refactoring, Debugging and Testing
Each type of development task has different characteristics and requires prompts structured
differently. Here are the optimal structures for the three most common tasks: refactoring,
debugging and test generation.
Prompts for Refactoring
A good refactoring prompt must specify the direction of change (toward which
pattern or principle you are migrating), the compatibility constraints (what
must not change) and the success criterion (how to know the refactoring
succeeded).
Prompt template for TypeScript refactoring toward Clean Architecture
# Template: Refactoring Prompt
"""
Task: Refactoring toward [TARGET PATTERN]
Current code:
[PASTE THE CODE TO REFACTOR]
Motivation:
[Explain why you are doing this refactoring]
E.g.: "This service does too many things. Violating SRP makes testing hard."
Refactoring direction:
[Describe the pattern or principle you are migrating toward]
E.g.: "Separate business logic from persistence with Repository Pattern"
CRITICAL constraints (do not modify):
- The public interface of UserService must remain unchanged
- Existing tests must continue to pass
- No new dependencies beyond those in package.json
Success criteria:
- UserService no longer imports Prisma directly
- IUserRepository is an injectable interface
- UserRepository can be mocked in tests without Prisma
Expected output:
1. UserRepository interface (src/domain/repositories/IUserRepository.ts)
2. PrismaUserRepository implementation (src/infrastructure/repositories/)
3. Updated UserService using the interface
4. Updated tests with repository mock
"""
Prompts for Debugging
Debugging is the task where context is most critical. The AI did not see the crash: it must
reconstruct the situation from the fragments you provide. An effective debugging prompt provides
the complete error message, stack trace, relevant code and environmental context.
Prompt template for debugging with stack trace
# Template: Debugging Prompt
"""
Environment:
- Node.js 22.3.0, TypeScript 5.4, Express 5.0.1
- Database: PostgreSQL 16 via Prisma 5.14
- OS: Ubuntu 22.04 (prod), macOS 14 (dev) - bug in BOTH
Error (complete message):
TypeError: Cannot read properties of undefined (reading 'email')
at UserController.createUser (src/controllers/user.controller.ts:47:28)
at Layer.handle [as handle_request] (express/lib/router/layer.js:95:5)
at next (express/lib/router/route.js:149:13)
...
Code where error occurs (user.controller.ts, lines 40-55):
[PASTE THE RELEVANT LINES]
Input data (from debug log):
Request body: { "name": "Mario", "role": "admin" }
Request headers: { "content-type": "application/json", "x-api-key": "..." }
What I already tried:
1. Added console.log at line 46: body.email is undefined
2. Checked middleware: body-parser is configured correctly
3. Tested with Postman with email present: works fine
Hypothesis:
Could the Zod validator not correctly transform optional values?
The schema accepts email as optional but the controller always assumes it is there.
I need:
1. Identify the root cause of the problem
2. Explain why it manifests only when email is missing
3. Propose the fix
4. Suggest how to prevent similar errors in the future
"""
Prompts for Test Generation
AI-generated tests are often too shallow if not properly guided. A good testing prompt must
specify the framework, the test type (unit, integration, e2e),
the edge cases to cover and the expected coverage level.
Prompt template for Jest/TypeScript unit test generation
# Template: Test Generation Prompt
"""
Framework: Jest 29 + TypeScript. Testing library: @testing-library/react (if UI).
Test type: Unit tests for the following function.
Function to test:
export function calculateShippingCost(
  orderValue: number,
  destination: 'IT' | 'EU' | 'WORLD',
  weight: number,
  options: { express?: boolean; insurance?: boolean } = {}
): ShippingCost {
  // implementation...
}
Business rules (MUST test):
- Free shipping for orders > 50 EUR in IT, > 100 EUR in EU
- Express: +50% on base cost
- Insurance: 2% of order value, min 2 EUR
- Max weight: 30kg, above throws WeightLimitExceededError
- WORLD destination: always paid, no free shipping
Cases to cover MANDATORILY:
1. Happy path: all destination types
2. Free shipping: exact thresholds (49.99 EUR vs 50.00 EUR vs 50.01 EUR)
3. Combined options: express + insurance together
4. Edge cases: weight 0, weight 30, weight 30.01 (expected error)
5. Negative values: negative orderValue (expected validation)
6. Rounding: verify prices have max 2 decimal places
Test format:
describe('calculateShippingCost', () => {
  describe('[category]', () => {
    it('should [behavior] when [condition]', () => {
      // arrange - act - assert
    });
  });
});
Expected coverage: 100% branch coverage on the function.
"""
Prompt Chaining for Complex Tasks
Prompt chaining is the technique of breaking a complex task into a sequence of connected
sub-tasks, where each step's output becomes the next step's input. It is particularly effective
when a single long prompt produces inconsistent or incomplete results. The key is
decomposition: identifying natural separation points in the problem.
Example: Complete Feature with Prompt Chaining
Imagine implementing an email notification system with rate limiting, queue and retry.
A single prompt for the entire feature often produces incomplete code. With chaining:
Prompt chaining: email notification system (5 steps)
# STEP 1: Architecture design (no code yet)
"""
Before writing any code, design the architecture for an email
notification system with these characteristics:
- Rate limiting: max 100 emails/hour per user
- Persistent queue with Redis (BullMQ)
- Automatic retry with exponential backoff (max 3 times)
- Email templates with variables
- Tracking: sent, failed, bounced, opened
Expected output:
1. Text diagram of components and relationships
2. List of TypeScript interfaces (without implementation)
3. Database schema for tracking
4. Potential issues and how we address them
Do not write code yet: I want to validate the architecture first.
"""
# STEP 2: Interfaces and types (based on step 1 output)
"""
The approved architecture is what you proposed in the previous step.
Now implement ONLY the TypeScript interfaces:
- IEmailService with all methods
- IEmailTemplate with typed variables
- INotificationJob for the queue
- IEmailTrackingEvent for tracking events
- Domain-specific error types
No implementations: interfaces and types only.
File: src/domain/interfaces/email.interfaces.ts
"""
# STEP 3: Core service implementation
"""
Interfaces are defined. Implement EmailService implementing
IEmailService using:
- nodemailer for sending
- Zod for input validation
- Winston for logging
Assume IEmailTemplateRenderer and IEmailQueue already exist
as injected dependencies. Use types defined in step 2.
Tests: include unit tests for send() and sendBulk() methods
mocking external dependencies.
"""
# STEP 4: Queue and rate limiting
"""
Implement EmailQueue using BullMQ that:
- Respects the 100 emails/hour rate limit per user
- Implements retry with backoff: 1min, 5min, 15min
- Persists jobs in Redis
- Emits events for tracking
Use the IEmailQueue interface defined in step 2.
Do not re-implement EmailService: use it as a dependency.
"""
# STEP 5: Integration and end-to-end tests
"""
All components are implemented. Now:
1. Create the DI container that wires everything together
2. Write an integration test simulating:
- 150 emails in 1 hour (must respect rate limit)
- A send that fails 2 times and succeeds on 3rd attempt
- A send that exhausts retries and goes to dead letter queue
3. Create a README.md for the module with setup and usage examples
"""
Benefits of Prompt Chaining
Granular control: validate each step before proceeding
Localized corrections: if one step is wrong, fix only that one
Clean context: each step has a precise, focused context
More reliable output: the model focuses on one problem at a time
Traceability: you can restart from any step in the chain
Conditional Prompt Chaining
Conditional chaining introduces decisions based on the output of previous steps. It is useful
when the path forward depends on the AI's initial analysis.
Conditional prompt chaining: code review with branching
# STEP 1: Preliminary analysis
"""
Analyze this Python function and classify the issues found
into three categories:
- CRITICAL: bugs or vulnerabilities that must be fixed immediately
- IMPORTANT: design issues or SOLID principle violations
- SUGGESTION: optional optimizations or improvements
def process_user_data(user_id, data):
    conn = sqlite3.connect('users.db')
    query = f"UPDATE users SET data = '{data}' WHERE id = {user_id}"
    conn.execute(query)
    conn.commit()
    conn.close()
    return True
Respond ONLY with the classification. Do not propose solutions yet.
"""
# STEP 2a (if CRITICAL issues exist): Immediate fixes
"""
You found [N] CRITICAL issues. Start with those.
For each critical issue:
1. Explain the specific vulnerability (e.g.: SQL injection via f-string)
2. Show the vulnerable code highlighted
3. Propose the fix with explanation
4. Indicate the relevant CVE or OWASP category
After the criticals, we'll address others in separate sessions.
"""
# STEP 2b (if no CRITICAL issues): General refactoring
"""
No critical issues found. Proceed with the full refactoring
addressing IMPORTANT issues. Maintain the same function signature
for compatibility with calling code.
"""
Claude Code vs Cursor vs Copilot: Prompt Comparison
Each tool has different characteristics that influence how to structure prompts. Knowing these
differences lets you adapt your communication to maximize output quality.
Claude Code: Prompts for End-to-End Operations
Claude Code operates in the terminal with access to the filesystem, bash commands and the entire
codebase. It is optimal for autonomous end-to-end tasks where you want to completely delegate a
sub-task. Prompts for Claude Code tend to be more declarative and goal-oriented.
Claude Code: prompt for autonomous end-to-end task
# Claude Code: effective prompt for autonomous task
"""
Implement the authentication module for this API.
Functional requirements:
- Registration with email/password (bcrypt hash, salt 12)
- Login with JWT access token (15min) + refresh token (7 days httpOnly cookie)
- Logout that invalidates the refresh token
- Refresh token endpoint
- Password reset via email (token with 1h expiry)
Stack: Node.js 22, Express 5, Prisma with PostgreSQL, Zod
Expected structure:
- src/modules/auth/ with controller, service, repository
- src/domain/interfaces/IAuthService.ts
- Prisma migrations for refresh_tokens table
- Jest tests for AuthService (unit) and auth routes (integration)
- README update with new endpoints
Proceed autonomously. At the end, show me a summary of the
changes made and how to test the result.
"""
Cursor: Prompts for Multi-File Editing
Cursor excels at coordinated changes across multiple files with Composer. Prompts for Cursor
are effective when they explicitly specify the files to modify and the relationships between changes.
Cursor: prompt for coordinated multi-file refactoring
# Cursor Composer: prompt for coordinated refactoring
"""
Refactoring: migrate from class-based services to functional services with
dependency injection via closures.
Files to modify (in this order):
1. src/services/userService.ts (current class)
2. src/controllers/userController.ts (uses the service)
3. src/app.ts (DI container)
4. src/services/__tests__/userService.test.ts (update mocks)
Target pattern:
// BEFORE (class-based)
class UserService {
  constructor(private repo: UserRepository) {}
  async findById(id: string) {...}
}
// AFTER (functional)
const createUserService = (repo: IUserRepository) => ({
  findById: async (id: string) => {...},
});
type UserService = ReturnType<typeof createUserService>;
Maintain the same public interface of the methods.
Existing tests must continue to pass without changes
to test logic, only to mock/setup.
"""
GitHub Copilot: Prompts via Comments and Chat
GitHub Copilot responds to inline comments in code and the IDE-integrated chat. The most
effective prompts for Copilot are contextual: cursor positioned at the right point and a
descriptive comment are often more effective than long explanations in chat.
Copilot: prompts via inline comments and contextual chat
// TECHNIQUE 1: Comments as prompts (file: src/utils/validation.ts)
// Validate email with RFC 5322 standard, return detailed error object
// with field name, message, and IETF error code. No external deps.
export function validateEmail(email: string): ValidationResult {
// Copilot suggests the implementation here
}
// TECHNIQUE 2: JSDoc as spec (more context = more accurate)
/**
* Calculate final price with cascading discounts applied in order:
* 1. Quantity discount (above 10: -5%, above 50: -10%, above 100: -20%)
* 2. User discount (0-30%, applied to price after quantity discount)
* 3. Promotional code (fixed amount, applied last)
*
* @throws {InvalidDiscountError} if userDiscount > 30 or promoAmount < 0
* @returns price rounded to 2 decimal places
*/
function calculateFinalPrice(
  basePrice: number,
  quantity: number,
  userDiscount: number,
  promoAmount: number
): number {
  // Copilot implements following the JSDoc spec
}
// TECHNIQUE 3: Copilot Chat with code selection
// [Select the code] then in chat:
// "@workspace Refactor this method toward the Result<T,E> pattern
// we use in other parts of the project (see src/shared/Result.ts)"
| Tool | Strength | Optimal Prompt | Context Window |
|------|----------|----------------|----------------|
| Claude Code | End-to-end tasks, deep reasoning | Declarative goals, broad autonomy | 200K tokens |
| Cursor | Multi-file editing, codebase awareness | Specific files, coordinated patterns | 128K tokens (Composer) |
| Copilot | Autocomplete, inline suggestions | Inline comments, precise JSDoc | Contextual to file |
| Windsurf | Proactive context, exploration | High-level goals, explores autonomously | 128K tokens |
Anti-Patterns: Prompts That Do Not Work
Knowing anti-patterns is as important as knowing patterns. These are the prompts that
systematically produce low-quality output or out-of-context results.
Anti-Pattern 1: The Vague Prompt
The vague prompt is the most common. It requests changes without specifying the direction,
constraint or success criterion. The AI produces something plausible but often misaligned
with actual needs.
Anti-pattern vs correct pattern: vague prompts
# ANTI-PATTERN: vague and without context
"Improve this function"
"Optimize performance"
"Make the code cleaner"
"Add error handling"
"Refactor this"
# CORRECT PATTERN: specific and contextualized
"Optimize this Python function with O(n^2) complexity to O(n log n)
using a heap. Maintain the same output but use only stdlib.
Include before/after benchmarks with timeit."
"Add error handling to this Express endpoint that currently throws
unhandled exceptions. Use the centralized AppError pattern already
defined in src/errors/AppError.ts. Test cases 400, 401, 404, 500."
Anti-Pattern 2: The Overly Long Prompt
Paradoxically, excessively long prompts (over 2,000 tokens) tend to produce worse results
than well-structured medium prompts. The model may lose focus on critical details buried in
the middle of the text, or try to satisfy too many requirements simultaneously producing
mediocre code on all fronts.
The 2,000-Token Limit for Prompts
Empirical research suggests that prompts exceeding 2,000 tokens for single tasks tend to
produce results of decreasing quality. If your prompt approaches this threshold, it is a
signal that you are trying to do too much at once. Break the task down with prompt chaining.
The exception is providing source code as context: including the complete file often improves
the result compared to extracting only a portion.
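To check whether a prompt is approaching that threshold, a rough count is enough. A sketch using the tiktoken library (an approximation: Claude, Copilot and Cursor models each tokenize differently, so treat the number as an order-of-magnitude signal; the prompt file name is hypothetical):
Rough token count for a prompt (sketch)
import tiktoken

def approx_tokens(prompt: str) -> int:
    # cl100k_base is a common general-purpose encoding;
    # other model families will give somewhat different counts
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt))

prompt = open("refactoring-prompt.txt").read()  # hypothetical prompt file
if approx_tokens(prompt) > 2000:
    print("Over ~2,000 tokens: consider splitting the task with prompt chaining")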
Anti-Pattern 3: Assuming Implicit Context
The AI does not remember previous conversations (except within the current session context) and
does not know your specific project unless you have described it. Assuming the "obvious" is
implicit is a systematic source of ineffective prompts.
Anti-pattern: implicit vs explicit context
# ANTI-PATTERN: assumes implicit context
"Add the 'status' field to the model"
# The AI doesn't know: which model? which database? which ORM?
# what type for 'status'? what allowed values? what migration?
"Use our standard pattern for this"
# The AI doesn't know your "standard pattern"
"How we usually handle errors"
# The AI doesn't know how you handle errors in your project
# CORRECT PATTERN: always explicit context
"Add the 'status' field to the User model in Prisma (prisma/schema.prisma).
Type is an enum: ACTIVE, INACTIVE, SUSPENDED, DELETED.
Default: ACTIVE. Required (not nullable).
Generate the Prisma migration. Update UserRepository.findByStatus(status).
Update existing tests."
Anti-Pattern 4: The "Do Everything" Mega-Prompt
Asking the AI to implement an entire feature in a single prompt almost always produces
incomplete results or with gaps. The model is forced to make choices about details you have
not specified, and the choices do not always match your expectations.
Anti-pattern: mega-prompt vs effective prompt chaining
# ANTI-PATTERN: mega-prompt
"Implement a complete user management system with CRUD,
JWT authentication, roles and permissions, rate limiting, audit log,
password reset via email, 2FA with TOTP, OAuth2 with Google and GitHub,
multi-device sessions, token revocation, GDPR export and deletion,
admin dashboard, complete tests and API documentation."
# Result: generic, incomplete code without adaptation to your stack
# CORRECT PATTERN: decomposition and priority
# Step 1: "Implement basic CRUD for User (POST/GET/PUT/DELETE) with Prisma"
# Step 2: "Add JWT authentication to the existing users module"
# Step 3: "Implement the RBAC role system based on these requirements..."
# Step 4: "Add rate limiting with Redis using the pattern already in AuthController"
# Validate each step before proceeding
Anti-Pattern 5: Not Specifying the Output Format
If you do not specify the output format, the AI chooses what it considers appropriate. Sometimes
that is fine, but often you get long explanations when you just wanted code, or code without
comments when you needed them, or a partial solution when you expected a complete one.
Specifying output format explicitly
# Explicit output format: examples
"Show me ONLY the code, without explanations."
"Provide a brief explanation of the approach first (max 3 lines),
then the complete code, then a usage example."
"Produce a diff in git format (+ for added lines, - for removed)."
"Show me only the lines that change, not the entire file."
"Respond with a JSON in this format:
{
'analysis': 'brief problem analysis',
'solution': 'solution code',
'tests': 'minimal unit tests',
'caveats': 'limitations or assumptions made'
}"
Metrics for Evaluating Prompt Quality
Like code, prompts can be improved iteratively by measuring their effectiveness. These metrics
let you objectively understand whether your prompts are improving over time.
Quantitative Metrics
| Metric | How to Measure | Target |
|--------|----------------|--------|
| Iterations per task | Count prompts needed for acceptable output | <3 for standard tasks |
| Acceptance rate | % of output accepted without manual changes | >60% for repetitive tasks |
| Test pass rate on first run | % of generated code that passes tests without fixes | >70% |
| Review time | Average time to review and approve output | Decreasing over time |
| Revert rate | % of AI-assisted commits later reverted | <5% |
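These metrics are easy to compute if you keep a lightweight log of your sessions. A minimal sketch, assuming a hypothetical JSONL file where each line records one completed task:
Computing prompt metrics from a session log (sketch)
import json
from pathlib import Path

def prompt_metrics(log_path: str) -> dict[str, float]:
    # Hypothetical line format: {"task": "...", "iterations": 2, "accepted_unchanged": true}
    sessions = [
        json.loads(line)
        for line in Path(log_path).read_text().splitlines()
        if line.strip()
    ]
    total = len(sessions)
    return {
        "avg_iterations_per_task": sum(s["iterations"] for s in sessions) / total,
        "acceptance_rate": sum(1 for s in sessions if s["accepted_unchanged"]) / total,
    }

print(prompt_metrics("prompt-sessions.jsonl"))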
The Continuous Improvement Cycle
The best prompts are not written on the first try: they evolve over time. Treat your prompt
templates like code: version them, test variations and update persistent configuration files
when you discover patterns that work better.
Continuous improvement workflow for prompts
# Workflow: iterative prompt improvement
# 1. Document prompts that work well
# Save in .claude/commands/ or in a docs/prompts/ folder
# 2. When a prompt fails, analyze why
"""
Failed prompt post-mortem:
- Prompt: "Add logging to the service"
- Problem: AI used console.log instead of winston
- Root cause: I did not specify the project logging library
- CLAUDE.md fix: added "Logging: always winston, never console.log"
"""
# 3. Update persistent configuration files
# CLAUDE.md updated with:
# ## Logging
# ALWAYS use winston (src/infrastructure/logger.ts)
# NEVER use console.log or console.error in production
# Levels: error, warn, info, debug
# Format: structured JSON with timestamp and request ID
# 4. Test the updated prompt
# 5. Measure the improvement (iterations needed)
Best Practices: The Prompt Engineering Decalogue
Here are the most effective practices derived from real-world experience with AI coding
assistants in professional development contexts.
Context first: every prompt must start with stack, versions and relevant
architecture. Take nothing for granted.
One task per prompt: maintain focus. If the task spans multiple dimensions,
use prompt chaining.
Specify constraints explicitly: what must not change is as important as
what must change.
Define the success criterion: how will you know if the result is acceptable?
Include it in the prompt.
Use persistent configuration files: CLAUDE.md, .cursorrules and
copilot-instructions.md are high-ROI investments with long-term benefits.
Match the pattern to the problem: zero-shot for standard tasks, few-shot
for custom conventions, CoT for algorithmic problems, meta-prompting for poorly defined problems.
Position critical context at the beginning and end: the model gives more
weight to opening and closing positions.
Iterate on the prompt, not just the code: if the output is not what you
expected, often the problem is in the prompt, not the model.
Document prompts that work: create a template library for recurring tasks
in your project.
Measure effectiveness: track iterations per task, acceptance rate and
review time to continuously improve.
Prompts Do Not Replace Review
Even the best-structured prompt does not guarantee 100% secure and correct code.
45% of AI-generated code fails security tests (Veracode 2025) regardless of prompt quality.
Prompt engineering reduces the number of iterations needed and improves output quality,
but critical human review remains indispensable, especially for code touching authentication,
authorization, sensitive data management and financial logic. Read the article on security
in vibe coding for specific guidelines.
Conclusion: Treat Prompts Like Code
Prompt engineering for AI IDEs is not a mysterious art reserved for those with special skills.
It is a technical discipline with clear principles, recognizable patterns and measurable metrics.
Like code, prompts are written, tested, refactored and improved over time.
Developers who get the best results from AI coding assistants share a systematic approach: they
invest time in configuring persistent files, use appropriate patterns for each type of task,
specify context and constraints explicitly and treat every failed prompt as an improvement
opportunity.
The difference between a developer who "uses" AI and one who "knows how to communicate" with it
is measured concretely: fewer iterations per task, more first-try acceptable outputs, less time
spent manually correcting generated code. The return on investment in prompt engineering is among
the highest in the entire vibe coding ecosystem.
The next article in the series tackles the most critical topic of this entire paradigm:
the security of AI-generated code. Because writing perfect prompts that produce
vulnerable code is not an improvement: it is a more subtle danger than visibly bad code.
Next Articles in the Series
07 - Security in Vibe Coding: The risks of AI-generated code,
the 45% that fails security tests and concrete mitigation strategies
08 - The Future of Agentic Development: 2026 trends, Anthropic
predictions on the developer's role and the autonomous agents roadmap
Key Takeaways
An effective prompt has three components: Context, Constraints, Expected Output
Use zero-shot for standard tasks, few-shot for custom conventions,
chain-of-thought for complex algorithmic problems
CLAUDE.md, .cursorrules and copilot-instructions.md are high-ROI investments:
configure once, benefit every session
Prompt chaining beats the mega-prompt for complex tasks: one task at a time,
validate each step
The most common anti-patterns are: vague, too long, implicit context,
unspecified output format
Treat prompts like code: version them, test them and improve them iteratively
The best prompt does not replace code review: 45% of AI code fails security
tests regardless of prompt quality
Related Articles
Vibe Coding Series: Read the other articles in the series for the
complete picture of vibe coding and agentic development
Claude Code Deep Dive: To master Anthropic's CLI and its
CLAUDE.md configuration system
Cursor IDE Series: For advanced cursorrules configuration and
the Cursor Composer workflow