The research lineage, data sources, backtesting discipline, and cross-market validation evidence that stand behind the Apex G-Score — together with the peer-review trail through which the framework exposes its reproducibility details to external scrutiny.
Not a literature review. The three axes emerged from direct observation of what governance failure actually looks like inside regulatory filings, before it shows up in press releases and before it enters academic scorecards. The academic genealogy came second, not first.
The G-Score did not originate in a literature review. It originated in filings: specifically, the filings that a minority shareholder receives only through legal action, encountered across years of direct practice in Korean shareholder litigation.
A consistent pattern emerged from that practice: the failure modes that ended in real value destruction rarely looked like what generic governance scorecards measured. They looked like three separate, independent patterns — not one composite “governance quality.” A company could disclose everything honestly and still be controlled by a single family. It could have a pristine independent board and still funnel value through opaque subsidiaries. Each was a different kind of failure; each required its own detection. That field observation is the first half of the framework.
The second half is academic. The decomposition into transparency, balance of power, and conflict-of-interest risk has clear antecedents in three bodies of research that had, until the G-Score, been kept separate.
But the assembly is different. Traditional governance indices ask compliance-scale questions and aggregate them into a single number. The G-Score reads the failure precursors directly from the regulatory filings that would otherwise require litigation to surface, and it decomposes rather than aggregates, so that the component that will eventually fail is not averaged into the components that will not.
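A minimal sketch of the distinction, in Python with hypothetical axis values: averaging the three axes into one composite hides a failing axis, while a decomposed read surfaces it. The axis names mirror the three described above; the numbers are illustrative and drawn from no scored company.

```python
# Hypothetical axis scores on a 0-100 scale (higher = healthier).
# Values are illustrative only; real indicator mappings and
# calibration weights are not public.
axes = {
    "transparency": 92,         # pristine disclosure record
    "balance_of_power": 88,     # independent board, dispersed control
    "conflict_of_interest": 31, # value leaking through related parties
}

# Aggregating approach: one composite number.
composite = sum(axes.values()) / len(axes)
print(f"composite: {composite:.1f}")   # ~70.3 -- looks acceptable

# Decomposing approach: report each axis, and let the worst axis
# carry the warning, because failures are axis-specific.
worst_axis, worst_score = min(axes.items(), key=lambda kv: kv[1])
print(f"worst axis: {worst_axis} at {worst_score}")  # the real risk
```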
No proprietary access. No surveys. No management interviews. No self-reported governance data. The G-Score is reproducible from records that any diligent analyst can retrieve from the primary regulators. The moat is not the data — it is the decomposition and the calibration that the data is subjected to.
The specific filing-to-variable mappings — which disclosure fields feed which indicator, which signal combinations trigger which category, and the calibration weights applied inside each market — are held within the subscription product. The sources themselves are not proprietary; a competent analyst with time can retrieve every filing referenced. The framework’s contribution is the structure imposed on them and the comparability that structure permits.
Backtesting is trivial to run and trivial to mis-run. A framework optimized on the same data it reports as validation will always look stunning on that data — and disappoint on new data. The four principles below exist to prevent that outcome from ever appearing in our numbers.
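As one illustration of that discipline, the held-out rule reduces to a single constraint: calibrate on one period, report only on a later period the calibration never saw. The stand-in scoring function, feature, and cutoff date below are assumptions for the sketch, not the framework's actual pipeline.

```python
from datetime import date

# records: (firm_id, filing_date, features, distressed_later)
# Illustrative stand-ins; the real feature set is not public.
records = [
    ("A", date(2016, 3, 30), {"x": 0.2}, False),
    ("B", date(2017, 3, 30), {"x": 0.9}, True),
    ("C", date(2021, 3, 30), {"x": 0.8}, True),
    ("D", date(2022, 3, 30), {"x": 0.1}, False),
]

CUTOFF = date(2020, 1, 1)  # assumed split point for the sketch
train = [r for r in records if r[1] < CUTOFF]
test = [r for r in records if r[1] >= CUTOFF]

def fit(rows):
    # Stand-in "calibration": a threshold chosen on training data only.
    return sum(r[2]["x"] for r in rows) / len(rows)

threshold = fit(train)

# Validation numbers may only ever be computed on the held-out slice;
# reporting accuracy on `train` is the mis-run the text warns about.
hits = sum((r[2]["x"] > threshold) == r[3] for r in test)
print(f"held-out hit rate: {hits}/{len(test)}")
```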
Validated across eight live markets, scored on filing data that preceded the observed outcome in every case. The three numbers below are the extent of what the public surface discloses. Per-market values, fold-level decompositions, and sector breakdowns remain inside peer-reviewed publication and the subscription product.
What the AUC figure means. Area-under-curve is the probability that a randomly selected future-distress firm receives a worse G-Score than a randomly selected non-distress firm. A value of 0.5 is a coin flip; 1.0 is perfect separation. In the markets where the framework has been most deeply calibrated, held-out AUC exceeds 0.9. In newer and proxy-affected markets the figure is lower, but the axis structure continues to separate distress from non-distress firms in every market validated. The pattern of cross-market consistency is, methodologically, the load-bearing result — any single-market peak on its own would not be.
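Concretely, the statistic can be computed as a pairwise rank probability (the Mann-Whitney formulation of area-under-curve). A minimal sketch, using made-up scores in which a lower G-Score means worse governance:

```python
def auc(distress_scores, nondistress_scores):
    """P(random distress firm scores worse than random non-distress firm).

    "Worse" here means a lower score; ties count half. This is the
    Mann-Whitney formulation of area-under-curve.
    """
    wins = 0.0
    for d in distress_scores:
        for n in nondistress_scores:
            if d < n:
                wins += 1.0
            elif d == n:
                wins += 0.5
    return wins / (len(distress_scores) * len(nondistress_scores))

# Illustrative scores only, not drawn from any validated market.
print(auc([22, 35, 41], [55, 48, 70, 63]))  # 1.0: perfect separation
print(auc([50, 50], [50, 50]))              # 0.5: coin flip
```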
What counts as a distress event. Each market’s validation universe is built from public, documentary outcomes — bankruptcies, delistings, audit-opinion rejections with material going-concern language, enforcement actions, court-documented material embezzlements. A firm is counted once, at the earliest failure-defining filing or court record. Earlier warning signs — price collapse, rating downgrades, trading halts — are not counted as events. They are the market reacting to the filing record, not the filing record itself. Per-market event counts vary because the documentary regimes vary; a single cross-market headline number would require definitional choices that the per-market Coverage pages make explicit rather than collapsing into one figure.
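The counting rule lends itself to a short sketch: keep only documentary event types, then collapse each firm to its earliest qualifying record. The type labels below are placeholders; each market's actual qualifying list is the definitional choice its Coverage page makes explicit.

```python
from datetime import date

# Assumed event-type labels for the sketch; real qualifying lists
# are defined per market.
DOCUMENTARY = {"bankruptcy", "delisting", "audit_rejection",
               "enforcement", "embezzlement_judgment"}

events = [
    ("firm1", date(2019, 5, 2), "trading_halt"),      # excluded
    ("firm1", date(2019, 8, 14), "audit_rejection"),  # qualifying
    ("firm1", date(2020, 1, 9), "delisting"),         # later, dropped
    ("firm2", date(2021, 3, 3), "downgrade"),         # excluded
]

first_event = {}
for firm, day, kind in events:
    if kind not in DOCUMENTARY:
        continue  # the market reacting to the record, not the record itself
    if firm not in first_event or day < first_event[firm][0]:
        first_event[firm] = (day, kind)

# firm1 counted once, at its earliest documentary record; firm2 not at all.
print(first_event)  # {'firm1': (datetime.date(2019, 8, 14), 'audit_rejection')}
```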
What “locally calibrated, universally compared” means in the numbers. A Grade A in Tokyo and a Grade A in Mumbai reflect the same axis thresholds applied to locally calibrated variables. Held-out accuracy in each market depends on how far that market’s calibration has matured and on the proxy gap for indicators not yet fully parseable from its filings. As calibration deepens, the held-out figures are expected to converge upward. Cross-market comparability is a property of the axis layer throughout; per-market accuracy is a property of the indicator layer and its data maturity.
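Structurally, the phrase describes a two-layer function: a market-specific calibration maps raw filing indicators onto a common axis scale, and a single universal threshold set maps that scale to grades. The percentile calibration and the grade cut-offs in the sketch below are assumptions; the real weights and bands are subscription material.

```python
import bisect

def calibrate_locally(raw_value, market_distribution):
    """Market layer: percentile of raw_value within its own market.

    A stand-in for the real per-market calibration, which is not public.
    """
    ranked = sorted(market_distribution)
    pos = bisect.bisect_left(ranked, raw_value)
    return 100.0 * pos / len(ranked)

# Universal layer: one threshold set applied to every market's axis scale.
# Cut-offs are illustrative, not the framework's actual bands.
GRADE_CUTS = [(80, "A"), (60, "B"), (40, "C"), (0, "D")]

def grade(axis_score):
    for cut, letter in GRADE_CUTS:
        if axis_score >= cut:
            return letter

tokyo_dist = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]    # assumed raw indicator values
mumbai_dist = [0.2, 0.5, 0.7, 1.1, 1.4, 1.6]

# Different raw values, same axis layer, same thresholds:
print(grade(calibrate_locally(0.85, tokyo_dist)))   # 'A' in Tokyo
print(grade(calibrate_locally(1.5, mumbai_dist)))   # 'A' in Mumbai
```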
Peer-reviewed academic publication permits disclosure of sector-level accuracy decomposition, fold-level validation detail, full regression tables, and reproducibility parameters — the kind of evidence a sophisticated reader requires to trust the framework. Individual variable weights, scoring bands, and Kill Switch thresholds remain within the subscription product at every outlet below.
Qualified researchers at accredited institutions may access the full indicator set, scoring bands, and calibration parameters under an academic non-disclosure agreement. Licensing is granted on a per-project basis and does not constitute an institutional subscription. Requests should be directed through the contact pathway with institutional affiliation, research scope, and intended publication outlet.
Methodology is the second of three canonical explanations. The structural definition sits one step upstream; the market-by-market application sits one step downstream.