EITC/AI/AIF Artificial Intelligence Fundamentals is the European IT Certification programme on modern foundations of artificial intelligence, including machine learning, deep learning, generative AI, AI project lifecycles, responsible AI, and practical AI workflows.
The curriculum of the EITC/AI/AIF Artificial Intelligence Fundamentals focuses on conceptual and practical competencies in understanding, evaluating, and responsibly applying AI systems, organized within the structure below. It comprises comprehensive, structured EITCI certification self-learning materials, supported by referenced open-access video didactic content, as a basis for preparation towards earning this EITC Certification by passing the corresponding examination.
Artificial intelligence (AI) is a broad field focused on building systems that can perform tasks we consider “intelligent”, such as perception, prediction, decision-making, and content generation. In modern practice, AI includes both rules/workflows and learning-based approaches. A core practical distinction is between predictive systems that output decisions (labels/scores/forecasts) and generative systems that output new content (drafts, summaries, images, audio, video). Successful AI in real deployments depends on more than “model training”: it requires correct framing, reliable data, meaningful evaluation, safe deployment, and continuous monitoring.
AI methods are used across a wide range of applications, from anomaly detection and forecasting, through computer vision and recommendations, to large language models (LLMs) that support writing, analysis, and knowledge work. Because many AI outputs are probabilistic (they can be fluent but wrong), practical AI competence includes understanding failure modes (hallucinations, bias, drift), using grounding and evaluation methods (e.g., retrieval-augmented generation, metrics, tests), and applying governance and safety practices (privacy-by-design, security, compliance and human oversight).
In practice, “AI” is an umbrella term covering multiple families:
- Rules/automation: deterministic workflows, scripts, and business rules used when behaviour must be exact and auditable.
- Machine learning (ML): learning patterns from examples (data) to output decisions such as labels, scores, forecasts, or rankings.
- Deep learning (DL): ML using neural networks with many layers, especially effective on unstructured data (text, images, audio) and large-scale datasets.
- Generative AI (GenAI): models that generate new content (text/images/audio/video/code) and therefore require additional evaluation and safeguards.
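The contrast between the first two families can be illustrated in a few lines: a rule-based system encodes the decision by hand, while a machine-learning model learns the same decision from labeled examples. The sketch below uses scikit-learn; the toy spam-filter features and labels are invented purely for illustration.

```python
# Rules/automation vs. machine learning on the same toy task.
from sklearn.linear_model import LogisticRegression

# Toy labeled data (invented for illustration):
# features = [message length, number of links], label 1 = spam.
X = [[120, 0], [30, 4], [200, 1], [15, 6], [180, 0], [25, 5]]
y = [0, 1, 0, 1, 0, 1]

# Rules/automation: an explicit, deterministic, auditable rule.
def rule_based(features):
    return 1 if features[1] >= 3 else 0

# Machine learning: the same kind of decision, learned from data.
model = LogisticRegression().fit(X, y)

print(rule_based([40, 5]))           # decision from the hand-written rule
print(model.predict([[40, 5]])[0])   # decision learned from examples
```

The trade-off shown here recurs throughout the curriculum: the rule is exact and easy to audit, while the learned model generalizes from data but must be evaluated empirically.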
A typical AI project follows an end-to-end lifecycle:
- Framing: define input → output, user action, constraints, and success criteria (metrics).
- Data & labels: collect and label examples; manage data quality, leakage, imbalance, and drift; choose a proper train/validation/test split (“practice exam vs final exam”).
- Training/configuration: fit models or configure GenAI workflows; tune on validation, not on the test set.
- Evaluation: interpret metrics (confusion matrix, thresholds, decision costs) and perform error analysis.
- Deployment & monitoring: integrate into workflows with safety guardrails; monitor drift, reliability, and impacts over time.
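The split discipline from the lifecycle above ("tune on validation, not on the test set") can be sketched as follows. This is a minimal illustration using scikit-learn; the synthetic dataset and the candidate hyperparameter values are assumptions made for the example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real labeled dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Train / validation / test split: the test set is the "final exam",
# touched exactly once at the end.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=0)

# Tune a hyperparameter (regularization strength C) on validation only.
best_C, best_acc = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):
    candidate = LogisticRegression(C=C).fit(X_train, y_train)
    acc = accuracy_score(y_val, candidate.predict(X_val))
    if acc > best_acc:
        best_C, best_acc = C, acc

# Evaluate the chosen configuration once on the held-out test set,
# including the confusion matrix for error analysis.
final = LogisticRegression(C=best_C).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, final.predict(X_test)))
print("confusion matrix:\n", confusion_matrix(y_test, final.predict(X_test)))
```

Repeatedly peeking at the test set during tuning would turn the "final exam" back into a practice exam and inflate the reported performance.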
In GenAI systems, reliability often improves with retrieval-augmented generation (RAG), where answers are grounded in trusted documents and can be accompanied by citations or traceable sources.
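The retrieval step of such a RAG pipeline can be sketched with simple lexical similarity: rank trusted documents against the question, then pass the best match (with its source name as a citation) to the generator. The documents below are invented for illustration, and a real system would follow this with an LLM call grounded in the retrieved context, typically using embedding-based rather than TF-IDF retrieval.

```python
# Minimal retrieval step of a RAG pipeline (sketch, toy documents).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "policy.md": "A refund is available within 30 days of purchase.",
    "shipping.md": "Orders ship within 2 business days.",
    "warranty.md": "The warranty covers manufacturing defects for one year.",
}

question = "How many days do I have to request a refund?"

# Vectorize the document collection and the question identically.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents.values())
q_vector = vectorizer.transform([question])

# Rank documents by cosine similarity to the question.
scores = cosine_similarity(q_vector, doc_vectors)[0]
best = max(zip(documents, scores), key=lambda pair: pair[1])

print("source:", best[0])              # traceable source / citation
print("context:", documents[best[0]])  # grounding passage for the generator
```

Because the answer is tied to a named source document, the system can show a citation and a reviewer can verify the claim against the original text, which is the core reliability benefit of RAG.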
Ethical, legal, and secure AI are integral to modern practice. This includes understanding bias and fairness, explainability concepts, privacy and data governance (GDPR principles such as minimization and retention, DPIA intuition), AI security and red teaming, and risk-based compliance frameworks (including the EU AI Act). The curriculum also addresses applied AI toolkits in professional roles (research, automation, analysis, operations, evaluation, governance) and modern agentic systems (single agents and multi-agent swarms) with human-in-the-loop safety controls.
Modern AI systems can fail in ways that traditional software typically does not. Therefore, practical AI competence includes:
- Evaluation and QA: selecting appropriate metrics, building small test sets (“golden sets”), and performing regression testing for updates.
- Grounding and trust: using RAG, citations, and source verification to reduce hallucinations.
- Privacy and governance: removing personal or confidential data before using tools, applying retention rules, and documenting risk decisions.
- Security mindset: protecting systems against prompt injection, data leakage, and adversarial manipulation; applying approval gates for sensitive actions.
- Operationalization: repeatable workflows, monitoring loops, logs, and cost controls (including token-aware practices and FinOps principles).
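The "golden set" regression testing mentioned above can be sketched as a small, version-controlled set of inputs with human-reviewed expected outputs, re-run after every model or prompt update. The `classify_sentiment` function below is a hypothetical stand-in for the system under test, with trivial logic so the sketch is runnable; a real deployment would call the deployed model instead.

```python
# Golden-set regression testing for an AI component (sketch).

def classify_sentiment(text: str) -> str:
    # Hypothetical stand-in for the real model under test.
    negative_words = {"terrible", "broken", "refund"}
    return "negative" if set(text.lower().split()) & negative_words else "positive"

# The golden set: curated examples with human-reviewed expected labels,
# kept small, stable, and under version control.
GOLDEN_SET = [
    ("The product arrived on time and works great", "positive"),
    ("This is terrible, I want a refund", "negative"),
    ("Completely broken on arrival", "negative"),
]

def run_regression(system) -> float:
    """Return the fraction of golden-set examples the system gets right."""
    passed = sum(system(text) == expected for text, expected in GOLDEN_SET)
    return passed / len(GOLDEN_SET)

score = run_regression(classify_sentiment)
print(f"golden-set pass rate: {score:.0%}")
assert score >= 0.9, "regression: pass rate dropped below threshold"
```

Running this check in CI turns model and prompt updates into auditable events: an update that drops the pass rate below the agreed threshold is blocked before it reaches users.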
The curriculum culminates with strategic synthesis: choosing the simplest effective AI approach (prompting vs RAG vs agents), establishing safe operating procedures, and building future-proof skills for ongoing adaptation as AI tools evolve rapidly.
To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.
The EITC/AI/AIF Artificial Intelligence Fundamentals Certification Curriculum references open-access didactic materials in video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering the relevant parts of the curriculum. Participants can access answers and ask further questions in the Questions and Answers section of the e-learning interface, under the currently progressed topic of the EITC programme curriculum. Direct and unlimited consultancy with domain experts is also available via the platform's integrated online messaging system, as well as through the contact form.
For details on the Certification procedure, see How it Works.
Curriculum Reference Resources
AI fundamentals (modeling, evaluation, deployment)
https://developers.google.com/machine-learning
Scikit-learn (classic ML baselines + metrics)
https://scikit-learn.org/
PyTorch (deep learning framework)
https://pytorch.org/
Google TensorFlow (deep learning framework)
https://www.tensorflow.org/
Hugging Face Transformers (LLMs + NLP tooling)
https://huggingface.co/docs/transformers
OpenAI API documentation (LLMs, tools, agents patterns)
https://platform.openai.com/docs
Google Gemini API documentation (multimodal + long context)
https://ai.google.dev/gemini-api
Anthropic documentation (tool use + MCP)
https://docs.anthropic.com/
RAG & knowledge management (vector search + document processing)
https://www.pinecone.io/learn/
Weaviate documentation (vector database)
https://weaviate.io/developers/weaviate
Unstructured documentation (document parsing & chunking pipelines)
https://docs.unstructured.io/
LangGraph (agent orchestration graphs)
https://langchain-ai.github.io/langgraph/
Evaluation, tracing & QA for LLM systems
https://docs.ragas.io/
LangSmith documentation (tracing + evaluations)
https://docs.smith.langchain.com/
Arize Phoenix (observability + evals)
https://phoenix.arize.com/
Responsible AI, security & governance
https://eur-lex.europa.eu/
GDPR (Regulation (EU) 2016/679 on EUR-Lex)
https://eur-lex.europa.eu/eli/reg/2016/679/oj
EU AI Act (Regulation (EU) 2024/1689 on EUR-Lex)
https://eur-lex.europa.eu/eli/reg/2024/1689/oj
NIST AI Risk Management Framework (AI RMF)
https://www.nist.gov/itl/ai-risk-management-framework
OWASP Top 10 for LLM Applications (LLM security risks)
https://owasp.org/www-project-top-10-for-large-language-model-applications/
MITRE ATLAS (Adversarial Threat Landscape for AI)
https://atlas.mitre.org/

