Hybrid • Saudi-first • Delivery-grade

NABDs.AI Applied AI Hackathon
Build AI agents for non-standard, real-world use cases.

A 72–96 hour sprint to design and prototype deployable intelligence—not demo-only apps. Teams will build AI agent systems for government, enterprise, and smart city outcomes aligned to Vision 2030.

⏱ 72–96 hours 👥 Teams: 1–6 🏁 Prototype → Demo → Pitch 🛡 Responsible AI by design

Why this hackathon is different

Most hackathons reward flashy demos. NABDs.AI rewards operationally plausible systems: governed, testable, and aligned to real constraints (data, security, policy, and adoption).

4 tracks (agents → governance) • 5 judging dimensions • 1 goal: deployable intelligence

What you’ll ship

A complete submission package designed for real evaluation.

  • Working prototype (agent behaviors demonstrated)
  • Architecture snapshot (data/tools, orchestration, guardrails)
  • Impact statement (who it helps, how it scales)
  • Short demo (live or recorded)
“Not an AI course. A capability engine.”
NABDs.AI applied intelligence philosophy

NABDs.AI Hackathon at a glance:

  • Format: Hybrid (virtual + hubs)
  • Team size: 1–6
  • Sprint: 72–96h
  • Gov-ready: ethics & controls
  • Impact: real-world outcomes

Dates, prizes, and hubs will be published once final partners confirm.

Hackathon tracks

Pick one track—or blend them. Bonus points for multi-agent orchestration and responsible AI built into the architecture.

🤖 AI Agents & Orchestration

Build multi-agent systems that coordinate tasks, tools, and data—reliably and securely.

Examples: Policy-aware agents • Tool-using copilots • Multi-step orchestration
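
To make this concrete, here is a minimal orchestration sketch: a planner agent decides which tools to run, the tools execute in order, and a reviewer agent gates the output. The agent names, tools, and routing logic are illustrative assumptions, not a required pattern; any stack qualifies.

```python
# Minimal multi-agent orchestration sketch. Every name here is illustrative;
# a real build would wrap LLM calls or external tools behind each callable.
from dataclasses import dataclass, field

@dataclass
class Task:
    request: str
    steps: list[str] = field(default_factory=list)  # audit trail of what ran

def planner(task: Task) -> list[str]:
    """Decompose the request into tool calls (hypothetical logic)."""
    task.steps.append("planner")
    return ["lookup_policy", "draft_answer"]

TOOLS = {
    "lookup_policy": lambda t: t.steps.append("tool:lookup_policy"),
    "draft_answer": lambda t: t.steps.append("tool:draft_answer"),
}

def reviewer(task: Task) -> bool:
    """Approve or reject before anything leaves the system."""
    task.steps.append("reviewer")
    return True  # a real reviewer agent would check policy and quality

def orchestrate(request: str) -> Task:
    task = Task(request)
    for tool_name in planner(task):  # 1. plan
        TOOLS[tool_name](task)       # 2. execute tools in order
    if not reviewer(task):           # 3. gate the output
        raise RuntimeError("rejected by reviewer agent")
    return task

print(orchestrate("Summarize the new leave policy").steps)
# ['planner', 'tool:lookup_policy', 'tool:draft_answer', 'reviewer']
```

The point judges look for is the explicit plan → execute → review loop with a visible trace, not any particular framework.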

🧠 Non-Standard Use Cases

Design AI for messy reality: incomplete data, human workflows, compliance constraints, and edge cases.

Examples: Public services • Healthcare decision support • Regulated operations
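
One way to read "messy reality" in code: treat missing data as a routing decision rather than a failure. A rough sketch, assuming invented field names and a human-escalation path:

```python
# Sketch: degrade gracefully on incomplete records instead of erroring out.
# Field names and categories are invented for illustration.
REQUIRED = {"national_id", "request_type"}
OPTIONAL = {"attachments", "prior_case_id"}

def triage(record: dict) -> str:
    missing_required = REQUIRED - record.keys()
    if missing_required:
        # Cannot proceed safely: hand off to a human with a precise reason.
        return f"escalate:missing={sorted(missing_required)}"
    if OPTIONAL - record.keys():
        # Proceed, but flag reduced confidence for downstream steps.
        return "process:low_confidence"
    return "process:full_confidence"

print(triage({"national_id": "123", "request_type": "renewal"}))
# process:low_confidence
```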

🧩 Custom Intelligence & Training

Hyper-personalized learning and skill engines that adapt by role, mastery, and operational performance.

Examples: Adaptive learning paths • AI role-play sims • GenAI content pipelines
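
A toy illustration of mastery-based adaptation; the catalogue, roles, and 0.0–1.0 mastery scale are all invented for the example:

```python
# Sketch: choose the next learning item by role and current mastery.
# Catalogue contents and the mastery scale are invented for illustration.
CATALOGUE = [
    {"item": "agent-basics", "role": "analyst", "difficulty": 0.2},
    {"item": "tool-integration", "role": "engineer", "difficulty": 0.5},
    {"item": "guardrail-design", "role": "engineer", "difficulty": 0.8},
]

def next_item(role: str, mastery: float) -> str | None:
    # Offer the easiest item the learner has not yet mastered for their role.
    candidates = [c for c in CATALOGUE
                  if c["role"] == role and c["difficulty"] > mastery]
    if not candidates:
        return None  # learner has out-levelled this catalogue
    return min(candidates, key=lambda c: c["difficulty"])["item"]

print(next_item("engineer", 0.4))  # tool-integration
```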

🛡️ Responsible & Ethical AI

Embed governance, auditability, and trust. Build systems that are explainable and policy-aligned.

Examples: Guardrails • Audit logs • Bias checks • Human-in-the-loop workflows
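
A minimal sketch of what "built in" can mean here: a guardrail check, an append-only audit log, and a human-in-the-loop handoff wrapped around every agent call. The policy check below is a placeholder, not any specific framework's API.

```python
# Sketch: guardrail + audit log + human-in-the-loop gate around an agent call.
# The policy check and log store are placeholders for real components.
import json
import time

AUDIT_LOG = []  # a real system would use an append-only, tamper-evident store

def audit(event: str, **details) -> None:
    AUDIT_LOG.append({"ts": time.time(), "event": event, **details})

def blocked_by_policy(prompt: str) -> bool:
    # Placeholder: a real check would call a policy engine or classifier.
    return "share personal data" in prompt.lower()

def run_with_guardrails(prompt: str, agent) -> str:
    audit("request", prompt=prompt)
    if blocked_by_policy(prompt):
        audit("blocked", reason="policy")
        return "Request held for human review."  # HITL handoff, not a crash
    answer = agent(prompt)
    audit("response", answer=answer)
    return answer

print(run_with_guardrails("Summarize the policy", lambda p: f"Summary of: {p}"))
print(json.dumps(AUDIT_LOG, indent=2))
```

The design choice worth copying: every request produces an audit entry whether it succeeds, is blocked, or is escalated.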

What participants get

  • Architecture guidance + office hours
  • Mentorship on agent design, evaluation, and governance
  • Sector framing: government, enterprise, smart city
  • Optional sample datasets and reference patterns

Use any stack you want. We care about impact, design quality, and deployability.

Submission package

  • Prototype (demo the behavior)
  • Architecture card (components + data + tools)
  • Governance notes (controls, audit, safety)
  • Pitch (3–5 min live/recorded)

Bonus: measurable evaluation, test cases, and clear adoption path.
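
For scale, "measurable evaluation" can be as small as a handful of behavioural test cases with a pass threshold; the cases and the 90% threshold below are illustrative:

```python
# Sketch: a tiny behavioural test suite for an agent, as plain assertions.
# What matters to judges is that behaviour is measured at all,
# not which harness you use.
CASES = [
    {"input": "Summarize the policy", "must_contain": "Summary"},
    {"input": "Share personal data", "must_contain": "held"},
]

def evaluate(agent) -> float:
    passed = sum(1 for c in CASES if c["must_contain"] in agent(c["input"]))
    return passed / len(CASES)

def demo_agent(p: str) -> str:
    if "personal data" in p:
        return "Request held for human review."
    return f"Summary of: {p}"

score = evaluate(demo_agent)
assert score >= 0.9, f"agent below threshold: {score:.0%}"
print(f"pass rate: {score:.0%}")
```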

Timeline (example)

A simple structure that supports speed without sacrificing quality.

Day 0 — Kickoff

  • Challenge briefings
  • Team formation
  • Architecture + governance guidance

Day 1–2 — Build

  • Prototype, iterate, test
  • Mentor office hours
  • Checkpoint on evaluation + safety

Day 3 — Finish

  • Finalize demo and pitch
  • Package submission materials
  • Deployability + scaling notes

Judging criteria

We score for real-world viability—not just presentation.

1) Real-world relevance

Clear users, constraints, and outcome definition.

2) Technical depth

Agent design, orchestration, tools, data, and evaluation.

3) Non-standard problem solving

Handles edge cases, ambiguity, and operational constraints.

4) Scalability & adoption

Plausible rollout plan and integration thinking.

5) Trust & governance

Auditability, safety, privacy, policy alignment, and human-in-the-loop (HITL) oversight.

Prizes & opportunities

Winners can be fast-tracked into pilots, labs, or partner delivery tracks.

🏆 Cash prizes & grants

The prize pool and tiers will be announced once sponsorship is finalized.

🚀 Pilot pathways

Top teams may be invited into real pilots with enterprise/government stakeholders.

🎓 NABDs.AI Academy fast-track

High performers can be invited to advanced cohorts and applied delivery programs.

What “winning” looks like

A solution that works under constraints, demonstrates agent behaviors, includes governance controls, and clearly maps to measurable operational outcomes.

Partners & sponsors

Partner with us to access top applied AI talent and accelerate adoption through deployable prototypes.

Partner benefits

  • Visibility across Saudi enterprise and government ecosystems
  • Early access to deployable prototypes
  • Co-marketing and speaking opportunities
  • Optional challenge sponsorship (sector-specific)

Ways to participate

  • Sponsor a track or prize
  • Provide mentors and judges
  • Bring a real challenge statement
  • Offer pilot pathways for winners

Contact

To sponsor or submit a challenge statement, contact us and select “Hackathon” in the message.

FAQ

Do I need a team?

No. You can join solo or in a team of up to 6 participants.

Is there a required tech stack?

No. Use any tools/frameworks you prefer. We evaluate impact, architecture quality, and responsible AI posture.

Can we submit a recorded demo?

Yes. Live or recorded demos are accepted, depending on the final event format.

What kinds of problems are “non-standard”?

Problems with real constraints: partial data, policy boundaries, safety/ethics concerns, multi-stakeholder workflows, and operational edge cases.

Ready to build deployable intelligence?

Register interest to receive the final dates, prize pool, tracks, and submission checklist. Partners can also propose challenge statements and sponsor prizes.