Sparity Technologies · Delivery & SDLC

AI Enablement Programme 2026

A structured, role-differentiated training framework embedding AI across the full software delivery lifecycle — from PM to Database Engineer.

52 Curated courses · 11 Target roles · 7 Learning sources · 95% Free courses · 4 Training tiers
Training Philosophy — 4-Tier Model
🧭
Tier 1 — Awareness
All staff · 2–6 hrs
AI literacy, ethics, shared vocabulary, first hands-on Claude session. Mandatory for every role before role-specific tracks begin.
🛡️
Tier 1.5 — AI Guardrails
All staff · 1–2 hrs
Read and acknowledge the AI Usage Policy. Understand approved tools, data classification, what is allowed vs prohibited. Completed before any tool access is granted.
⚙️
Tier 2 — Application
Role-specific · 8–16 hrs
AI applied to real workflows — writing requirements, coding, test generation, SQL support, delivery reporting. Core role productivity gains.
🔬
Tier 3 — Build
Devs, Leads · 20–40 hrs
APIs, agents, MCP integrations, multi-agent systems, automated evaluation pipelines. Building AI-powered internal tools and workflows.
Tier 1.5 is the gateway. No team member gets access to approved AI tools until Tier 1 (Awareness) and Tier 1.5 (AI Guardrails Policy acknowledgement) are both marked complete. AI Champions track this per team.
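As an illustration, the gateway rule above is simple enough for an AI Champion to script against a completion-tracking export. A minimal sketch — the field names (`tier1_complete`, `tier1_5_complete`) are assumptions, not a real dashboard schema:

```python
# Sketch of the Tier 1 / Tier 1.5 gateway rule. The record fields are
# illustrative assumptions, not a real Anthropic Academy export format.
def tool_access_allowed(record: dict) -> bool:
    """Grant AI tool access only when both gateway tiers are complete."""
    return bool(record.get("tier1_complete")) and bool(record.get("tier1_5_complete"))
```

An AI Champion could run this over the team roster each week to produce the tool-access list.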
Key Updates from Original Curriculum
Anthropic Academy launched March 2026 with 16 free certified courses — now the primary platform for all Claude-specific training. All courses award completion certificates at no cost.
Google Bard course removed. Bard was rebranded to Gemini in Feb 2024. Replaced with the Gemini for Google Cloud learning path and the Generative AI Leader certification — both free and current.
Microsoft AI-900 retiring June 30, 2026. Teams should target the new AI-901 path. All free Microsoft Learn training paths remain active and updated.
AI Across the SDLC
SDLC Phase | AI Application | Tools / Courses | Primary Roles
Requirements | Draft user stories, BRDs, acceptance criteria from rough notes | Claude, Copilot in Word | PM, TL
Design | Review wireframes, suggest patterns, generate API contracts | Claude, Copilot | TL, Dev
Development | Code generation, refactoring, docs, PR reviews | Claude Code, GitHub Copilot, Cursor | Dev, TL
Testing / QA | Generate test cases, edge cases, defect summaries, regression | Claude, Automated Testing for LLMOps | QA
Database | NL-to-SQL, schema docs, query optimisation, ETL support | Claude, DB Agent course | DB
CI/CD | Pipeline configs, Dockerfiles, IaC scripts, runbooks | Claude Code, MCP integrations | Dev, TL
Delivery | Sprint reviews, status reports, stakeholder comms, risk forecasting | Claude Cowork, Claude in Excel | PM, DM
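To make the Database row concrete, here is a minimal NL-to-SQL sketch using the `anthropic` Python SDK. The schema is an anonymised illustration (as the guardrails policy requires) and the model id is an assumption — substitute whatever model is approved:

```python
# Sketch: NL-to-SQL with the anthropic Python SDK. The schema is an
# anonymised illustration and the model id is an assumption — use the
# model approved under the guardrails policy.
import os

def build_nl_to_sql_prompt(schema_ddl: str, question: str) -> str:
    """Compose a prompt that asks for exactly one SQL statement."""
    return (
        "You are a SQL assistant. Given this schema:\n\n"
        f"{schema_ddl}\n\n"
        f"Write one SQL query answering: {question}\n"
        "Return only the SQL, no explanation."
    )

# Anonymised sample schema — never paste real production schemas.
SCHEMA = """CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_id INT,
    total DECIMAL(10, 2),
    created_at DATE
);"""

prompt = build_nl_to_sql_prompt(SCHEMA, "total revenue per customer in 2025")

if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic  # pip install anthropic
    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed id; substitute your approved model
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.content[0].text)  # review, then test in non-prod before any use
```

Per the policy, the generated SQL must still be reviewed and run in a non-production environment first.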
Slide 14 — Presentation Ready

Role–Course Matrix

All 36 courses mapped across 11 delivery roles. Filter by role or search by course name. Click any course to open it directly.

M Mandatory
O Optional
A Advanced (Sr. only)
— Not applicable
Course Source PM BA SM TL Dev DE AIE QA DB PBI DM
Personalised Learning Paths

Courses by Role

Each role has a curated, sequenced learning path. Mandatory courses first, then optional and advanced.

Verified · April 2026

Learning Sources

All platforms verified. URLs confirmed active. Sorted by relevance to IT delivery teams.

ANT
Anthropic Academy
anthropic.skilljar.com
Launched March 2026. 16 free courses with certificates. The primary platform for all Claude-specific training. Covers Claude 101, API, MCP, Claude Code, Cowork, AI Fluency, agents, and subagents.
All roles · 100% free · Certificates · Start here
MS
Microsoft Learn
learn.microsoft.com
Comprehensive free learning paths for AI fundamentals, AI agents on Azure, GitHub Copilot, and Microsoft 365 Copilot. AI-900 retiring June 2026 — replaced by AI-901. Exams paid, training free.
PM, TL, Dev · Training free · Exams paid
GCP
Google Skills
skills.google (formerly Cloud Skills Boost)
Gemini replaces Bard. Free foundational courses (Intro to GenAI, LLMs, Responsible AI). Generative AI Leader certification path is free — ideal for PMs and DMs. Hands-on labs require credits.
PM, DM, TL · Videos free · Labs need credits
DL
DeepLearning.AI
deeplearning.ai/short-courses
35+ short courses (1–2.5 hrs each), free during beta. Andrew Ng + top AI vendors. Best practical, hands-on content. New in 2025–26: agent evaluation, automated testing, database agents, A2A protocol.
Dev, QA, DB · Free (beta) · Highly practical
CO
Coursera (AI For Everyone)
coursera.org
Andrew Ng's AI For Everyone (2.4M+ enrolled) remains the best non-technical foundation. Free to audit. Best opening course for PM, DM, and QA roles before any role-specific track.
PM, DM, QA · Audit free
PMI
PMI (Project Mgmt Institute)
pmi.org
Free GenAI overview course specifically for PMs. Earns PDUs. Covers AI in scheduling, resource planning, risk management, and cost estimation. Best PM-specific AI course available for free.
PM only · Free + PDUs
90-Day Rollout Plan

Delivery Roadmap

A phased rollout designed to build momentum — from shared vocabulary in Month 1 to production AI usage by Month 3.

W1
Weeks 1–2
Leadership sign-off & setup
Present 19-slide deck to CTO and management. Get tool access approved (Claude, GitHub Copilot). Assign AI Champions — one per team. Set up progress tracking via Anthropic Academy and Microsoft Learn dashboards. Draft AI Guardrails Policy v1 and distribute for review.
M1
Month 1
Foundation — All staff
Mandatory for everyone: AI For Everyone (Coursera, 6 hrs) + Intro to Prompt Engineering Fundamentals (Simplilearn, 1 hr) + AI Fluency: Framework & Foundations (Anthropic, 1 hr) + Claude 101 (Anthropic). Run internal "AI Day" workshop. Goal: shared vocabulary, ethics baseline, first Claude session for every role. Deliverable: all staff complete Tier 1 baseline.
M2
Month 2
Role tracks begin
PMs: PMI GenAI Overview + GenAI Leader path (Google). Devs: Vibe Coding + ChatGPT API + Claude Code in Action. QA: Prompt Eng for Devs + Automated Testing for LLMOps. DB: Vertex AI + Building Your Own Database Agent. TLs: MCP Basics + OpenAI Agents Guide + GitHub Copilot Fundamentals. DMs: Claude Cowork + GenAI Leader.
M3
Month 3
Pilot sprint — hands-on integration
Run a real project sprint with AI embedded at each SDLC layer. Devs use Claude Code on a live codebase. QA generates test suites with AI. PMs draft sprint documentation using Claude. DB tries NL-to-SQL on real schemas. Leads measure before/after velocity. AI Champions document wins and blockers. Key metric: every developer uses AI on at least one real task in production code.
M4
Month 4
Advanced tracks — Sr. Devs & Leads
Sr. Devs + TLs: CrewAI Multi-Agent Systems + MCP Advanced Topics + Introduction to Subagents + Evaluating AI Agents. DB: Building and Evaluating Data Agents + HuggingFace LLM Course (self-paced). AI Champions run internal "lunch and learn" sessions. Begin internal tool-building pilots.
M5+
Month 5 onwards
Expand to other departments
Apply the same 4-tier model to Sales, Support, HR, and Finance. Run internal certification assessments. Measure KPIs: sprint velocity delta, AI-assisted PR rate, test coverage increase, hours saved per PM per week. Publish internal AI usage report. Plan Year 2 advanced agent and automation builds.
Success KPIs
Metric | Target | Measured by | Timeline
% staff completing Tier 1 | 100% | Anthropic Academy dashboard | End Month 1
% devs using AI on real tasks | 80%+ | Git commit logs + team leads | End Month 3
AI-assisted PRs per sprint | Track trend | GitHub / Copilot analytics | Monthly
Test coverage delta (QA) | +15% target | CI/CD coverage reports | End Month 3
Hrs saved per PM/week on docs | 3–5 hrs | PM self-report + AI Champions | Quarterly
Teams with active AI champion | 100% | CTO office tracking | End Month 1
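Git has no standard "AI-assisted" field, so measuring the AI-assisted commit rate needs a convention. A sketch, assuming teams agree to add an `AI-assisted: yes` trailer to commit messages — a hypothetical team convention, not a Git or GitHub feature; Copilot analytics remains the primary source:

```python
# Sketch: estimate the AI-assisted commit rate from git history.
# Assumes an "AI-assisted: yes" commit-message trailer — a hypothetical
# team convention, not a built-in Git or GitHub feature.
import subprocess

TRAILER = "AI-assisted: yes"

def ai_assisted_rate(log_text: str) -> float:
    """Fraction of NUL-separated commit messages carrying the trailer."""
    commits = [c for c in log_text.split("\0") if c.strip()]
    if not commits:
        return 0.0
    return sum(TRAILER in c for c in commits) / len(commits)

def rate_for_repo(since: str = "2.weeks") -> float:
    """Score the current repository (git log -z NUL-separates messages)."""
    out = subprocess.run(
        ["git", "log", "-z", f"--since={since}", "--format=%B"],
        capture_output=True, text=True, check=True,
    ).stdout
    return ai_assisted_rate(out)
```

Team leads could run this per sprint and feed the trend into the monthly KPI review.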
AI Safe Usage Guardrails Policy v1.0

AI Governance & Usage Policy

Effective April 2026. Applies to all Delivery and Development roles. Review by: CTO / AI Programme Lead.

01 Purpose & scope

This policy establishes safe, responsible, and productive guidelines for the use of AI tools within the Delivery and Development department at Sparity Technologies. It applies to all delivery roles: Project Managers, Business Analysts, Scrum Masters, Team Leads, Developers, Data Engineers, AI Engineers, QA Engineers, Database Engineers, Power BI Developers, and Delivery Managers.

This policy covers any use of AI tools — whether for coding, documentation, analysis, communication, or testing — in the context of company or client work.

02 Core principles
Humans remain responsible
AI tools are assistants, not decision-makers. Every output used in company or client work must be reviewed, validated, and approved by the responsible human. The person submitting the output is accountable for its quality.
Data confidentiality is non-negotiable
No client data, proprietary code, or sensitive information may be submitted to public AI tools without CTO approval and a signed Data Processing Agreement. This applies regardless of how data is framed.
Transparency in AI usage
Team members must disclose AI usage to their lead when AI made a substantial contribution to a deliverable. AI-generated content must not be represented as entirely human-authored in client deliverables or formal submissions.
Approved tools only
Only tools listed in the approved tools section of this policy may be used for company or client work. Use of unapproved AI tools on company hardware for company purposes constitutes a policy violation.
03 Approved AI tools
🤖
Claude (claude.ai)
APPROVED — All roles
Primary AI assistant for all roles
No client data in free tier. Use Claude Pro/Teams for sensitive work. Preferred for all text, code, and analysis tasks.
💻
Claude Code (terminal)
APPROVED — Dev, TL
Agentic coding in terminal
For local/sandbox repos only. Not production codebases without lead approval. Review all outputs.
GitHub Copilot
APPROVED — Dev, TL, QA
Inline code suggestions
Review all AI-generated code before committing. Do not commit code you cannot explain. Works within existing IDE.
🔷
Microsoft 365 Copilot
APPROVED — PM, DM, TL
Word, Excel, Teams, Outlook
Integrated into Microsoft 365 apps. Review all AI-generated content before sharing externally.
🟡
ChatGPT (OpenAI)
RESTRICTED — All roles
Non-confidential tasks only
May be used for public/generic tasks only. No client data, no proprietary code, no internal project names.
🔵
Google Gemini
RESTRICTED — PM, DM
Awareness and planning only
Replaces Bard. Suitable for non-sensitive planning tasks. No source code or client documents.
🤗
HuggingFace hosted models
RESTRICTED — Dev, DB (Sr.)
Research and prototyping only
Must not be connected to production systems. For experimentation and learning only.
🚫
Unapproved AI tools
PROHIBITED — All roles
Browser extensions, AI plugins, unknown tools
All AI tools must be approved by the CTO before use on company or client work. No exceptions.
Approved tool list reviewed quarterly by the CTO. New tools require security review and DPA assessment before approval.
04 Usage rules by activity
Activity | Status | Guidance
05 Role-specific guidance
Project Managers & Delivery Managers
  • AI may draft requirements, user stories, RAID logs, sprint reviews, and stakeholder comms.
  • All AI-drafted documents must be reviewed for factual accuracy before sharing with clients or leadership.
  • Do not share client names, project details, or budgets with public AI tools.
  • Complete the PMI GenAI Overview course; use Claude Cowork as the primary productivity tool.
Business Analysts
  • AI may assist with drafting business requirements, process flow diagrams, and gap analysis documents.
  • All AI-generated requirements must be validated against stakeholder intent before sign-off — AI cannot capture tacit knowledge.
  • Do not share client business rules, pricing models, or commercially sensitive data with public AI tools.
  • Use Claude Cowork for structured documentation and Microsoft Copilot Chat for meeting summaries.
Scrum Masters
  • AI may assist with sprint planning drafts, retrospective summaries, and velocity analysis.
  • Do not include individual team member performance data or personal feedback in AI prompts.
  • AI-generated sprint summaries must be reviewed before sharing with stakeholders.
  • Use Claude Cowork for ceremony notes; do not share client Jira data with public AI tools.
Team Leads & Architects
  • Team Leads are responsible for governing AI usage within their team and enforcing this policy.
  • AI-assisted architecture decisions must receive human lead sign-off before adoption.
  • Leads must maintain a record of pilot use cases and AI adoption outcomes for quarterly review.
  • MCP integrations require lead approval before connecting to internal systems.
Developers
  • All AI-generated code must be reviewed line by line before committing. Do not commit code you cannot explain.
  • Use AI for boilerplate, documentation, debugging suggestions, and test generation — not to replace code understanding.
  • Agent-based tools must be approved by Tech Lead before connecting to internal systems.
  • Never submit production connection strings, secrets, or API keys to any AI tool.
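As a last-line check for the "never submit secrets" rule, text can be passed through a redaction filter before it is pasted into any AI prompt. A minimal sketch — the patterns are illustrative, not an exhaustive scanner, and a dedicated secret scanner should remain the primary control:

```python
# Sketch: redact obvious secrets before text is pasted into an AI prompt.
# Patterns are illustrative only — a dedicated scanner (e.g. a pre-commit
# secret-scanning hook) should remain the primary control.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                            # AWS access key id shape
    re.compile(r"(?i)\b(password|pwd|secret|token)\s*=\s*\S+"), # key=value credentials
    re.compile(r"(?i)Server=[^;]+;.*Password=[^;]+"),           # connection strings
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

If the filter ever fires on real work, that text should not have been heading to an AI tool in the first place.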
Data Engineers
  • AI may assist with pipeline documentation, schema explanation, ETL logic drafting, and NL-to-SQL generation.
  • All AI-generated SQL and pipeline scripts must be tested in a non-production environment before deployment.
  • Do not share real production data, connection strings, or client database names with any AI tool.
  • Use anonymised schemas and sample data when prompting AI for data modelling assistance.
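The anonymised-schema rule can be partly automated: swap real identifiers for placeholders before prompting, and keep a local map to translate the AI's answer back. A sketch with hypothetical identifier names:

```python
# Sketch: anonymise schema identifiers before sending DDL to an AI tool.
# The identifier names in the usage below are hypothetical examples.
import re

def anonymise_idents(ddl: str, idents: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each listed identifier with x1, x2, ... and return the
    reverse map so AI-generated SQL can be translated back locally."""
    mapping: dict[str, str] = {}
    for i, name in enumerate(idents, start=1):
        alias = f"x{i}"
        mapping[alias] = name
        ddl = re.sub(rf"\b{re.escape(name)}\b", alias, ddl)
    return ddl, mapping
```

The reverse map never leaves the engineer's machine, so the AI tool only ever sees placeholder names.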
AI Engineers
  • AI Engineers building internal AI tools must document all model choices, prompt strategies, and evaluation methods.
  • Any agent or pipeline that accesses internal systems must be reviewed and approved by the CTO before deployment.
  • LLM outputs used in production systems must be evaluated using agreed benchmarks — never deployed untested.
  • Fine-tuned models trained on company or client data require explicit DPA coverage and CTO sign-off.
QA / Test Engineers
  • AI may generate test cases, edge-case scenarios, and defect summaries.
  • AI-generated test suites must be reviewed for coverage accuracy before inclusion in CI pipelines.
  • Do not share production test data or client test environments with public AI tools.
  • All AI-evaluated test results must be verified by a human before marking as passing.
Database Engineers
  • AI-generated SQL must be reviewed and tested in a non-production environment first.
  • Schema and ETL docs generated by AI must be validated against actual database structures.
  • Do not submit real production schema details, client DB names, or connection strings to any AI tool.
  • Use the Building Your Own Database Agent course as the foundation for NL-to-SQL work.
Power BI Developers
  • AI may assist with DAX expression generation, report layout suggestions, and data model documentation.
  • Validate all AI-generated DAX against actual data before publishing reports. Incorrect measures can mislead business decisions.
  • Do not share client Power BI datasets, data model schemas, or report data with public AI tools — use anonymised samples only.
  • AI-generated Power BI insights presented to clients must be reviewed and approved by the lead analyst before delivery.
06 Data classification & AI tool matching
Data type | Classification | AI tools permitted
Generic code snippets / pseudocode | Public | Any approved tool — Claude, ChatGPT, Copilot
Internal project code (non-client) | Internal | Claude (API/Enterprise) or GitHub Copilot only
Client source code | Confidential | Only tools with signed DPA and enterprise data agreement
Client data / PII | Restricted | PROHIBITED in all public AI tools without explicit DPA
Meeting notes / project names | Internal | Use anonymised descriptions. No client names in public AI.
Credentials / API keys / secrets | Critical | NEVER submit to any AI tool under any circumstances
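The classification table can be encoded as a simple lookup for a hypothetical pre-flight check in internal tooling — the tool and classification labels here are illustrative shorthand, not official product names:

```python
# Sketch: the data-classification table as a lookup for a hypothetical
# pre-flight check. Tool labels are illustrative shorthand.
PERMITTED = {
    "public":       {"claude", "chatgpt", "copilot"},
    "internal":     {"claude-enterprise", "copilot"},
    "confidential": {"dpa-covered-tool"},  # only tools with a signed DPA
    "restricted":   set(),                 # prohibited without explicit DPA
    "critical":     set(),                 # never submit to any AI tool
}

def is_allowed(classification: str, tool: str) -> bool:
    """True only if the tool is permitted for this data classification."""
    return tool in PERMITTED.get(classification.lower(), set())
```

Note the default: an unknown classification permits nothing, matching the policy's deny-by-default stance.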
07 Incident reporting

Any accidental submission of confidential or client data to an AI tool must be reported to the team lead and CTO within 24 hours. Include: the AI tool used, date/time, description of data submitted, client or project affected, and immediate action taken. Failure to report a known data incident is a separate and more serious policy violation.

08 AI Champions & governance

Each delivery team designates one AI Champion responsible for: monitoring team compliance, logging AI use cases monthly, escalating requests for new tool approvals, and running internal knowledge-sharing sessions. The CTO reviews this policy quarterly.

09 Consequences of policy violation
Violation type | Example | Consequence
Minor | Using unapproved tool for non-confidential task | Verbal warning. Mandatory refresher training.
Moderate | Sharing client project name in public AI | Written warning. Incident log. Lead review.
Serious | Uploading client code/data to public AI tool | Formal disciplinary action. Client notification may be required.
Critical | Misrepresenting AI output; bypassing security controls | Immediate escalation to HR and CTO. May result in contract action.
Approved by: _________________________  ·  Role: CTO / AI Programme Lead  ·  Date: April 2026  ·  Version 1.0 — subject to quarterly review