A structured, role-differentiated training framework embedding AI across the full software delivery lifecycle — from PM to Database Engineer.
| SDLC Phase | AI Application | Tools / Courses | Primary Roles |
|---|---|---|---|
| Requirements | Draft user stories, BRDs, acceptance criteria from rough notes | Claude, Copilot in Word | PM, TL |
| Design | Review wireframes, suggest patterns, generate API contracts | Claude, Copilot | TL, Dev |
| Development | Code generation, refactoring, docs, PR reviews | Claude Code, GitHub Copilot, Cursor | Dev, TL |
| Testing / QA | Generate test cases, edge cases, defect summaries, regression | Claude, Automated Testing for LLMOps | QA |
| Database | NL-to-SQL, schema docs, query optimisation, ETL support | Claude, DB Agent course | DB |
| CI/CD | Pipeline configs, Dockerfiles, IaC scripts, runbooks | Claude Code, MCP integrations | Dev, TL |
| Delivery | Sprint reviews, status reports, stakeholder comms, risk forecasting | Claude Cowork, Claude in Excel | PM, DM |
All 36 courses mapped across 6 delivery roles.
| Course | Source | PM | BA | SM | TL | Dev | DE | AIE | QA | DB | PBI | DM |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Each role has a curated, sequenced learning path. Mandatory courses first, then optional and advanced.
All platforms verified. URLs confirmed active. Sorted by relevance to IT delivery teams.
A phased rollout designed to build momentum — from shared vocabulary in Month 1 to production AI usage by Month 3.
| Metric | Target | Measured by | Timeline |
|---|---|---|---|
| % staff completing Tier 1 | 100% | Anthropic Academy dashboard | End Month 1 |
| % devs using AI on real tasks | 80%+ | Git commit logs + team leads | End Month 3 |
| AI-assisted PRs per sprint | Track trend | GitHub / Copilot analytics | Monthly |
| Test coverage delta (QA) | +15% target | CI/CD coverage reports | End Month 3 |
| Hrs saved per PM/week on docs | 3–5 hrs | PM self-report + AI Champions | Quarterly |
| Teams with active AI champion | 100% | CTO office tracking | End Month 1 |
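Measuring "AI-assisted PRs per sprint" from Git history depends on how teams mark AI involvement. A minimal sketch, assuming the (unverified) convention that AI-assisted commits carry a `Co-authored-by` trailer naming the assistant:

```python
import re

# Assumes teams tag AI-assisted commits with a Co-authored-by trailer
# naming the assistant -- a team convention, not enforced by Git itself.
AI_COAUTHOR = re.compile(r"Co-authored-by:.*\b(Copilot|Claude)\b", re.IGNORECASE)

def count_ai_assisted(log_text: str) -> int:
    """Count commits whose trailers credit an AI assistant.

    Expects output shaped like `git log --format='%H%n%(trailers)%n--'`,
    i.e. commits separated by a line containing only `--`.
    """
    commits = log_text.split("\n--\n")
    return sum(1 for commit in commits if AI_COAUTHOR.search(commit))

# Illustrative log excerpt (hypothetical hashes and emails).
sample = (
    "abc123\nCo-authored-by: GitHub Copilot <copilot@github.com>\n--\n"
    "def456\nCo-authored-by: Jane Dev <jane@example.com>\n--\n"
    "789aaa\nCo-authored-by: Claude <noreply@anthropic.com>\n"
)
print(count_ai_assisted(sample))  # prints 2
```

GitHub's Copilot analytics would give richer data; a script like this is only a fallback for repositories where commit trailers are the sole signal.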
Effective April 2026. Applies to all Delivery and Development roles. Review by: CTO / AI Programme Lead.
This policy establishes guidelines for the safe, responsible, and productive use of AI tools within the Delivery and Development department at Sparity Technologies. It applies to all roles, including Project Managers, Team Leads, Developers, QA Engineers, Database Engineers, and Delivery Managers.
This policy covers any use of AI tools — whether for coding, documentation, analysis, communication, or testing — in the context of company or client work.
| Activity | Status | Guidance |
|---|---|---|

| Data type | Classification | AI tools permitted |
|---|---|---|
| Generic code snippets / pseudocode | Public | Any approved tool — Claude, ChatGPT, Copilot |
| Internal project code (non-client) | Internal | Claude (API/Enterprise) or GitHub Copilot only |
| Client source code | Confidential | Only tools with signed DPA and enterprise data agreement |
| Client data / PII | Restricted | PROHIBITED in all public AI tools without explicit DPA |
| Meeting notes / project names | Internal | Use anonymised descriptions. No client names in public AI. |
| Credentials / API keys / secrets | Critical | NEVER submit to any AI tool under any circumstances |
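The "never submit secrets" rule is easiest to uphold when checked mechanically before text is pasted into any AI tool. A minimal sketch with illustrative patterns (a real deployment would use a vetted secret scanner such as gitleaks or truffleHog rather than these hand-rolled regexes):

```python
import re

# Illustrative credential-like patterns; not an exhaustive ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
]

def contains_secret(text: str) -> bool:
    """Return True if the text matches any credential-like pattern."""
    return any(pattern.search(text) for pattern in SECRET_PATTERNS)

print(contains_secret("api_key = sk-test-123"))        # prints True
print(contains_secret("def add(a, b): return a + b"))  # prints False
```

A check like this could run as a pre-submission hook or a shared utility for AI Champions; it reduces, but does not eliminate, the risk of accidental disclosure.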
Any accidental submission of confidential or client data to an AI tool must be reported to the team lead and CTO within 24 hours. Include: the AI tool used, date/time, description of data submitted, client or project affected, and immediate action taken. Failure to report a known data incident is a separate and more serious policy violation.
Each delivery team designates one AI Champion responsible for: monitoring team compliance, logging AI use cases monthly, escalating requests for new tool approvals, and running internal knowledge-sharing sessions. The CTO reviews this policy quarterly.
| Violation type | Example | Consequence |
|---|---|---|
| Minor | Using unapproved tool for non-confidential task | Verbal warning. Mandatory refresher training. |
| Moderate | Sharing client project name in public AI | Written warning. Incident log. Lead review. |
| Serious | Uploading client code/data to a public AI tool | Formal disciplinary action. Client notification may be required. |
| Critical | Misrepresenting AI output; bypassing security controls | Immediate escalation to HR and CTO. May result in contract action. |