The full breakdown of the 100-point evaluation: what each category measures, how points are awarded, and what we weigh in the human review.
Carlos (Bloqarl) - April 30, 2026
Every application is scored on a 100-point scale split across three categories: Product Engagement (40), Social Sharing (15), and Protocol Merit (45). The first two are auto-calculated. The third is human-reviewed. The highest score does not automatically win — final award decisions weigh qualitative factors alongside the number — but a higher score is a strictly better starting position.
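For concreteness, here is a minimal sketch of how the three category scores combine into the 100-point total. The `ApplicationScore` shape and the clamping are illustrative assumptions, not the platform's actual code.

```typescript
// Hypothetical sketch of the 100-point composition (not the platform's real code).
// Product Engagement and Social Sharing are auto-calculated; Protocol Merit is
// assigned by a human reviewer.
interface ApplicationScore {
  productEngagement: number; // auto-calculated, 0-40
  socialSharing: number;     // auto-calculated, 0-15
  protocolMerit: number;     // human-reviewed, 0-45
}

function totalScore(s: ApplicationScore): number {
  const clamp = (v: number, max: number) => Math.min(Math.max(v, 0), max);
  return (
    clamp(s.productEngagement, 40) +
    clamp(s.socialSharing, 15) +
    clamp(s.protocolMerit, 45)
  );
}

// Example: maxed auto-scored categories plus a strong human review.
console.log(totalScore({ productEngagement: 40, socialSharing: 15, protocolMerit: 38 })); // 93
```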
This page is the full rubric. Read it before you apply, and read it again before you submit.
Four scoring tasks, ten points each, all auto-verified through the shared platform backend. None are required to apply, but together they signal that you've engaged with our security stack and know what to expect from the engagement.
The Krait Claude Skill is a structured smart-contract security review you run yourself in Claude Code. It produces a markdown report covering high-likelihood findings, design observations, and gas considerations. Submit the report URL on the application; we verify it loads and references your protocol.
What we want: a real run on your real codebase. Don't run it on a tutorial repo just to earn points — the auto-verifier checks that the report content matches the GitHub URL you submit elsewhere on the application.
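To make that check concrete: a verifier of roughly this shape would catch a report that doesn't reference the submitted repo. The function name and the matching rule are assumptions; the real verifier may be stricter.

```typescript
// Hypothetical sketch of the report check: the URL must load, and the report
// body must reference the same GitHub repo submitted on the application.
async function verifyKraitReport(reportUrl: string, githubRepoUrl: string): Promise<boolean> {
  const res = await fetch(reportUrl);
  if (!res.ok) return false; // report URL must load

  const body = await res.text();
  // Normalize the repo reference: "https://github.com/org/repo" -> "org/repo".
  const repoPath = new URL(githubRepoUrl).pathname.replace(/^\/|\/$/g, "");
  return body.includes(repoPath);
}

// Usage (hypothetical URLs):
// await verifyKraitReport(
//   "https://example.com/reports/my-audit.md",
//   "https://github.com/my-org/my-protocol",
// );
```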
krait.zealynx.io walks you through a structured pre-audit. The platform issues an assessment ID; submit it on the application form.
What we want: a complete walkthrough, not a partial start. Stop midway and the points don't credit.
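A completion check for this step only needs to confirm that the assessment ID resolves to a finished walkthrough. The endpoint and response shape below are stand-ins (the platform's API isn't documented here), so treat this as a sketch of the idea, not the real interface.

```typescript
// Hypothetical completion check for a pre-audit assessment ID.
// The endpoint and response shape are invented for illustration.
interface AssessmentStatus {
  id: string;
  status: "in_progress" | "complete";
}

async function assessmentCredits(assessmentId: string): Promise<boolean> {
  const res = await fetch(`https://example.com/api/assessments/${assessmentId}`);
  if (!res.ok) return false;
  const data = (await res.json()) as AssessmentStatus;
  // A partial walkthrough stays "in_progress" and earns no points.
  return data.status === "complete";
}
```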
Three modules of your choice from Zealynx Academy. Each takes 30–60 minutes; the platform tracks completion. Auto-verified.
What we want: completion, not skipping ahead. Modules track time-on-page and quiz performance. We don't enforce minimum quiz scores, but we do flag accounts that click through suspiciously fast.
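One plausible shape for that flag, with invented thresholds (the rubric only says time-on-page and quiz performance are tracked):

```typescript
// Hypothetical heuristic for flagging click-through accounts. The tracked
// signals come from the rubric; the thresholds here are invented.
interface ModuleRecord {
  minutesOnPage: number; // platform tracks time-on-page
  quizScore: number;     // 0-1; no minimum enforced, but tracked
}

function flagsAsClickThrough(modules: ModuleRecord[]): boolean {
  if (modules.length === 0) return false;
  // Modules take 30-60 minutes; finishing one in under ~10 is suspicious,
  // and near-zero quiz performance across the board compounds the signal.
  const tooFast = modules.filter((m) => m.minutesOnPage < 10).length;
  const avgQuiz = modules.reduce((sum, m) => sum + m.quizScore, 0) / modules.length;
  return tooFast >= 2 || (tooFast >= 1 && avgQuiz < 0.3);
}
```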
The most involved task. Shadow Audits range from 2 to 7 days depending on contest size. They are real audits run on already-completed engagements, with our findings as the answer key. You submit findings; we score them against that key.
What we want: genuine effort. We don't require a perfect score; we require that you tried. The point of a Shadow Audit isn't to test you — it's to give you real practice writing audit findings, which most founders have never done. Almost everyone learns something.
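To make "scored against the answer key" concrete, here is one minimal matching scheme. The identifier-based matching and the severity weights are assumptions; in practice, humans read the findings.

```typescript
// Hypothetical scoring of submitted findings against an answer key.
// Matching by a shared identifier and weighting by severity are assumptions.
type Severity = "high" | "medium" | "low";

interface Finding {
  id: string; // e.g. the function or invariant the finding targets
  severity: Severity;
}

const WEIGHTS: Record<Severity, number> = { high: 3, medium: 2, low: 1 };

function shadowAuditScore(submitted: Finding[], answerKey: Finding[]): number {
  const found = new Set(submitted.map((f) => f.id));
  const earned = answerKey
    .filter((f) => found.has(f.id))
    .reduce((sum, f) => sum + WEIGHTS[f.severity], 0);
  const possible = answerKey.reduce((sum, f) => sum + WEIGHTS[f.severity], 0);
  return possible === 0 ? 0 : earned / possible; // fraction of the key covered
}
```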
Three actions, five points each, evidence-verified.
A post on X, from an account that's at least 30 days old with real engagement. Submit the URL on the application.
Same pattern, on LinkedIn. The same "30+ days, real engagement" check applies.
Each applicant gets a unique referral link on their dashboard. When someone you referred creates an account and submits a complete application, you earn 5 points. Self-referrals don't count. The cap is 5 (one successful referral) — referring 10 people doesn't stack to 50 points.
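The referral rule reduces to a few lines. The data shapes here are assumptions; the cap, the self-referral exclusion, and the complete-application requirement are from the rule above.

```typescript
// Hypothetical referral credit check: 5 points for the first successful
// referral, no stacking, no self-referrals.
interface Referral {
  referrerId: string;
  newAccountId: string;
  applicationComplete: boolean;
}

function referralPoints(applicantId: string, referrals: Referral[]): number {
  const successful = referrals.filter(
    (r) =>
      r.referrerId === applicantId &&
      r.newAccountId !== applicantId && // self-referrals don't count
      r.applicationComplete,            // account created AND application submitted
  );
  // Capped at one credited referral: 10 referrals still yield 5 points, not 50.
  return successful.length > 0 ? 5 : 0;
}
```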
This is the main event. Read it carefully.
Is this a codebase we can audit?
| Score | What that looks like |
|---|---|
| 10–12 | Clean repo, working build, meaningful test coverage, fuzz harnesses or invariants where they make sense, clear scope. |
| 7–9 | Working build, some tests, scope is clear but maybe one module needs trimming. |
| 4–6 | Build works only sometimes, light tests, scope is fuzzy. We'd need to scope-call before we could quote. |
| 0–3 | No clean build, no tests, scope undefined. Not really auditable yet. |
Will this protocol still exist in 6 months? An audit on a protocol that doesn't ship is wasted budget for both of us. Signals we look for: testnet or mainnet deployment, design-partner protocols, prior funding rounds, evidence of external traction (waitlists, beta users, integration commitments).
This is not an "is your idea good?" judgment. We're not VCs. It's "is there evidence this team will ship and stay alive?"
What changes for users in the four weeks after the audit ships?
At least one verifiable public identity is a hard eligibility requirement, not a scoring one. The 10 points here come from going beyond that minimum.
This is the most subjective category. The underlying question we're trying to answer: would this team ship a fix within 48 hours if we found a critical?
A few things deliberately not in the rubric:
If you're not selected, the score we compute is yours. Reapply in a future season; many of the inputs (Academy modules, Krait runs, prior Shadow Audit work) carry over.
Email contact@zealynx.io. We try to respond within a business day.