
Security Metrics We Think Every Platform Should Publish (And How to Read Them)

8 min read
Published: September 6, 2025
Category: Security

Warning About Vanity Metrics

This article begins with a warning: not all metrics are created equal. Vanity metrics—numbers that look good but don't reflect real security posture—are worse than no metrics at all. They create false confidence, obscure risk, and mislead users and institutions evaluating platform security.

Vanity metrics are numbers that look reassuring but fail to measure security outcomes. For example, raw bug counts can mean either thorough testing or poor engineering; uptime can look excellent even if the downtime was caused by a breach; tool counts measure procurement rather than effectiveness; and audit counts say nothing about whether findings were addressed.

They are dangerous because they create false confidence and misaligned incentives. Teams can optimize for presentation-friendly numbers while real risks accumulate silently—especially if monitoring and incident response are deprioritized.

What matters instead are metrics that map to real questions: how often incidents occur, how quickly they are detected and contained, and how consistently controls enforce policy under pressure.

A good rule: if a metric cannot change an operational decision, it probably should not be the headline metric.


Why Metrics Matter for Trust

Trust in Web3 platforms is often built on promises: “We’re secure,” “We follow best practices,” “We’ve never been hacked.” Those claims are difficult to verify and easy to misunderstand. They rarely describe how the platform behaves when something goes wrong.

Metrics change that. They turn trust into something measurable, comparable, and verifiable. When a platform publishes security metrics, it's making a claim that can be checked against reality over time.

But metrics are only useful if they measure the right things, are calculated correctly, and are interpreted honestly. Otherwise they collapse into the vanity metrics described above: reassuring numbers that create false confidence and obscure risk.

The most useful public reporting answers three fundamentals: how often security incidents happen (and how severe they are), how quickly detection and containment occur, and how effectively controls prevent abuse. When published with definitions and context, these metrics become comparable over time.

These metrics, when published with context and honesty, enable users and institutions to make informed decisions about platform risk.


Which Metrics Matter (And Which Don't)

Not all security metrics are created equal. Some measure real security posture. Others measure activity that may or may not correlate with security outcomes.

Metrics That Matter

Start with outcomes. Incident rate should be reported by severity so readers can distinguish between nuisance events and user-impacting failures.

Detection speed matters because it defines how long attackers can operate. Mean time to detect (MTTD) measures how quickly the platform notices abnormal behavior, not whether prevention was perfect.

Containment and recovery are the next signal. Mean time to recover (MTTR) measures how quickly incidents are mitigated, resolved, and returned to a known-good state.
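To make the two definitions concrete, MTTD and MTTR can be computed from incident timestamps. A minimal sketch follows; the records, field order, and timestamps are illustrative assumptions, not a real platform schema:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (occurred, detected, resolved).
# All timestamps here are made up for illustration.
incidents = [
    (datetime(2025, 8, 1, 10, 0), datetime(2025, 8, 1, 10, 12), datetime(2025, 8, 1, 11, 0)),
    (datetime(2025, 8, 9, 2, 30), datetime(2025, 8, 9, 2, 38), datetime(2025, 8, 9, 4, 10)),
]

def mttd_minutes(records):
    """Mean time to detect: detection time minus occurrence, averaged."""
    return mean((det - occ).total_seconds() / 60 for occ, det, _ in records)

def mttr_minutes(records):
    """Mean time to recover: resolution time minus detection, averaged."""
    return mean((res - det).total_seconds() / 60 for _, det, res in records)
```

Published externally, these values would typically appear as broad ranges rather than exact minutes, for the reasons discussed later in this article.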

Authentication metrics show whether accounts are under pressure and whether defenses are working without breaking usability. Failure rates are most meaningful as trends, paired with notes on product changes.

Lockouts and step-up challenges are a proxy for abuse detection. Too many can indicate noisy policies; too few can indicate weak detection. The point is calibration and trend clarity.

Rate limit triggers indicate automated abuse and scraping pressure. Reporting by endpoint or category can show where the platform is being stressed without revealing exact thresholds.
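The by-category view can be sketched in a few lines; the endpoint labels below are hypothetical, and real reporting would define its categories explicitly:

```python
from collections import Counter

# Hypothetical rate-limit trigger log: one endpoint category per event.
triggers = ["auth", "search", "auth", "api", "auth", "search"]

# Counting by category shows where abuse pressure lands ("auth" leads
# in this made-up sample) without disclosing the thresholds that fired.
by_category = Counter(triggers)
```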

Unauthorized access attempts and authorization failures indicate whether boundaries are being probed and whether policy is being enforced consistently.

Compliance violations (reported as aggregated outcomes) show whether policy is being applied in practice. These metrics should focus on enforcement and trend direction, not sensitive individual details.

Metrics That Don't Matter (Or Mislead)

Raw totals without definitions are usually noise. “Total security events” often measures log volume, not risk, and can be inflated or reduced by instrumentation choices.

Availability and tooling are context, not posture. Uptime can be high while security is poor, and tool counts often reflect budgets rather than disciplined operations.

“Zero incidents” and “many audits” can mislead without context. The important questions are whether detection is effective and whether audit findings are tracked to remediation.


Which Metrics Are Safe to Publish (And Which Aren't)

Not all metrics should be published publicly. Some metrics provide attacker value that outweighs transparency benefits, while others can be published safely to build trust.

Public metrics should be aggregated and time-bounded. Incident counts by severity, MTTD/MTTR in broad ranges, and high-level enforcement rates can build trust without revealing precise control thresholds.

Avoid publishing attacker-enabling detail: vulnerability specifics, exact detection thresholds, real-time event feeds, infrastructure inventories, or playbooks that describe containment steps.

The safest pattern is historical reporting with clear definitions—enough transparency to evaluate posture, without turning the report into a roadmap for bypass.
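One way to implement that pattern is to bucket precise internal numbers into broad public ranges before publishing. The cutoffs below are arbitrary assumptions, chosen only to illustrate the shape of the mapping:

```python
def publish_bucket(minutes: float) -> str:
    """Map a precise internal detection time to a broad public range.
    The published range conveys posture without exposing exact speed."""
    if minutes < 15:
        return "under 15 minutes"
    if minutes < 60:
        return "15-60 minutes"
    if minutes < 240:
        return "1-4 hours"
    return "over 4 hours"
```

An internal MTTD of, say, 12 minutes and one of 3 minutes both publish as the same range, which is exactly the point: the report stays meaningful without telling attackers how long they have.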

A mature publication model balances trust and safety: transparent enough to be meaningful, constrained enough to avoid leaking attacker value.


Metrics Without Response Are Meaningless

Metrics are only valuable if they lead to action. Publishing metrics without demonstrating that they inform response, drive improvements, or enable decision-making creates false confidence without real security benefit.

Metrics should drive action. If incident rate rises, the platform should be able to explain why and what changed. If detection time drifts upward, the platform should show what is being improved in monitoring and triage.

Metrics also need to map to product decisions. If authentication failures spike after a UX change, fix the UX. If lockouts are noisy, recalibrate policies. If abuse pressure rises, harden endpoints and improve rate limiting. Numbers without decisions are reporting theater.

The credibility signal is follow-through: clear remediation, documented changes, and trend improvement that readers can verify over time.

When metrics are paired with visible remediation, they become evidence of operational maturity—not just reporting.


Security + Compliance Metrics as a System

Security metrics don't exist in isolation. They form a system that, when viewed together, provides a more complete picture of platform security posture.

A useful way to think about metrics is by control surface. Authentication metrics describe account pressure and takeover attempts. Authorization metrics describe boundary enforcement. Abuse-prevention metrics describe automated traffic and exploitation attempts. Compliance metrics describe whether policy is enforced consistently. And system-health metrics provide context for whether the platform can operate reliably while controls are firing.

Collection should be consistent and structured: events recorded with type, severity, and time window so trends are comparable. The goal is to make analysis possible without requiring readers to guess what changed.

High-severity events should be distinguishable from background noise so teams can triage quickly and reviewers can understand what “serious” means under the platform’s definitions.
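A structured event record along those lines might look like the sketch below; the field names and severity labels are assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class SecurityEvent:
    event_type: str    # e.g. "auth_failure", "rate_limit", "authz_denied"
    severity: str      # "low" | "medium" | "high" | "critical"
    occurred_at: datetime

def high_severity(events):
    """Split user-impacting events from background noise for triage."""
    return [e for e in events if e.severity in ("high", "critical")]
```

Because every event carries type, severity, and a timestamp, trends stay comparable across reporting windows and "serious" has a checkable definition.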

By viewing these metrics together, we can identify patterns: Are auth failures increasing? Are rate limits being triggered more frequently? Are compliance violations trending down? These patterns inform both security posture and operational improvements.


Publishing Without Leaking Attacker Value

Publishing security metrics creates a tension: transparency builds trust, but detailed metrics might provide attackers with information about platform defenses, detection capabilities, and response times.

The key is publishing metrics that demonstrate security posture without revealing operational details that attackers could exploit.

Publish aggregate incident rates, broad detection and resolution times, and high-level enforcement outcomes. Share enough to evaluate posture, but not enough to infer thresholds or bypass paths.

Do not publish specific detection thresholds, containment playbooks, infrastructure details, tool configurations, or real-time event feeds. Those details can be operationally useful to attackers.

The goal is to demonstrate security maturity and operational discipline without providing a roadmap for attackers. Published metrics should answer "How secure are you?" not "How do we bypass your security?"

A practical default is monthly publication with definitions, calculation methods, and honest limitations—paired with careful aggregation.
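A monthly report following that default might be shaped like this sketch; every field name and value is an illustrative assumption, not a real publication:

```python
# Hedged sketch of a monthly metrics publication: aggregated counts,
# broad time ranges, explicit definitions, and stated limitations.
monthly_report = {
    "period": "2025-08",
    "incidents_by_severity": {"high": 1, "medium": 4, "low": 12},
    "mttd_range": "under 15 minutes",
    "mttr_range": "1-4 hours",
    "definitions": {
        "incident": "confirmed security event with user or data impact",
        "mttd": "mean time from first malicious activity to detection",
    },
    "limitations": "excludes events below current logging thresholds",
}
```

Note what is absent: no thresholds, no infrastructure detail, no playbooks. The definitions and limitations fields do the trust-building work.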


How to Read Metrics (And Avoid Being Fooled)

Metrics are only useful if they're read correctly. Misinterpreting metrics can lead to false confidence or unnecessary concern.

Look at Trends, Not Snapshots

A single month's metrics are less meaningful than trends over time. Is the incident rate increasing or decreasing? Are response times improving or degrading? Trends reveal whether security is improving or deteriorating.
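The trend-over-snapshot idea can be sketched with a deliberately crude heuristic; the comparison rule here is an assumption for illustration, not a standard method:

```python
def trend(series):
    """Label a monthly series by comparing the latest value to the
    mean of the earlier values. Crude on purpose: real analysis
    should use longer windows and account for seasonality."""
    earlier = series[:-1]
    baseline = sum(earlier) / len(earlier)
    latest = series[-1]
    if latest < baseline:
        return "improving"   # for lower-is-better metrics like incident counts
    if latest > baseline:
        return "degrading"
    return "flat"

# Illustrative monthly incident counts.
incident_counts = [14, 12, 11, 9]
```

The direction labels assume lower is better, which holds for incident counts and detection times but not for every metric; the point is that the question "which way is this moving?" is answerable only with a series, never a snapshot.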

Understand Context

Metrics without context are misleading. A high authentication failure rate might indicate strong controls (blocking brute force) or weak UX (users struggling to log in). A low incident rate might indicate good security or poor detection. Always ask: "What does this number actually mean?"

Compare Apples to Apples

Different platforms calculate metrics differently. Before comparing platforms, ensure you understand how each platform defines and calculates its metrics. A platform that defines "incident" broadly will have higher incident rates than one that defines it narrowly.

Watch for Vanity Metrics

Some metrics look impressive but don't reflect real security posture. "Zero incidents" might mean perfect security or no detection. "100% uptime" doesn't measure security. "10 security audits" doesn't guarantee security. Always ask: "Does this metric actually measure security, or does it just look good?"

Understand What Metrics Don't Tell You

Metrics reveal what happened, not what could happen. A platform with low incident rates might be secure or might be untested. A platform with high incident rates might be insecure or might have excellent detection. Metrics are evidence, not proof.

Look for Honesty

Platforms that publish metrics honestly—acknowledging limitations, explaining context, discussing failures—are more trustworthy than platforms that only publish good news. Honest metric publication demonstrates maturity and accountability.


Continuous Improvement Loop

Metrics are not just for transparency—they're tools for improvement. Effective platforms use metrics to identify weaknesses, measure improvement, and demonstrate progress.

The improvement loop is simple: measure, analyze, change, and measure again. Metrics help teams spot trends early, validate whether controls actually reduced abuse, and prioritize work based on impact rather than assumptions.

This loop is what separates mature security programs from reactive ones. Metrics enable it by providing objective evidence of security posture and improvement.

But metrics alone aren't enough. They must be paired with honest analysis, effective response, and transparent communication. A platform that publishes metrics but doesn't act on them is not improving—it's just reporting.


Building Trust Through Transparency

Publishing security metrics is an act of transparency that builds trust. But transparency without honesty is worse than no transparency at all.

Effective publication includes clear definitions, time windows, and honest context: what the metric means, how it is calculated, what it does not capture, and how readers should interpret changes over time. Regular updates and historical data allow trend evaluation.

Transparency builds trust when it includes limitations and learnings, not just good news. Publishing metrics is meaningful when readers can see both posture and improvement.

This transparency demonstrates operational maturity and accountability. It shows that we take security seriously enough to measure it, analyze it, and be honest about it. That honesty, more than any single metric, is what builds trust.

That is how trust is built through transparency.

That is how platforms demonstrate accountability.

This is how we Become Alpha.