Behavior first
NullCS studies how players behave inside real encounters. The emphasis is on measurable structure around timing, visibility, movement pressure, and shot process.

NullCS studies whether suspicious behavior in Counter-Strike 2 demos can be surfaced in a way that stays measurable, reviewable, and honest about uncertainty. The goal is not to act like the problem is solved. The goal is to make serious progress on a difficult review task without hiding behind black-box language.
Blatant aimbot or spinbot footage is usually the easy case. The more important challenge is lower-visibility aim assist, recoil assist, and information advantage that does not stand out at first glance.
Outputs are meant to stay understandable enough that a reviewer can inspect the signal instead of trusting a black box, and the project does not present itself as universal cheat detection.
The first three numbers are not cheat probabilities. They summarize how loud the strongest player signal looks inside each benchmark slice. Median top-ranked signal means: for each demo, take the highest-ranked player in the lobby, then look at the middle value across that group of demos.
So a suspicious median of 0.748 versus 0.0073 on held-out normal legit demos means the suspicious slice surfaces much more strongly, while the legit slice stays pinned near zero. The pro slice staying at 0.0073 matters for the same reason: strong legitimate players are not being inflated just because they are skilled.
For a review system like NullCS, the shape is more important than the raw magnitude. You want suspicious benchmark demos to be visibly louder, while normal legit and pro stress-test demos stay quiet. If all three slices were high, the system would be noisy. If all three were low, it would not be useful.
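The "median top-ranked signal" described above can be sketched in a few lines. This is an illustrative reconstruction, not NullCS code: the `scores` structure and all numbers below are made up for the example.

```python
from statistics import median

def median_top_ranked_signal(scores):
    """For each demo, take the highest player signal in the lobby,
    then return the median of those per-demo maxima.

    `scores` maps demo_id -> {player_id: signal_score} (hypothetical shape).
    """
    top_per_demo = [max(players.values()) for players in scores.values()]
    return median(top_per_demo)

# Toy slice with invented values (not real benchmark output):
suspicious_slice = {
    "demo_a": {"p1": 0.81, "p2": 0.10, "p3": 0.05},
    "demo_b": {"p1": 0.02, "p2": 0.70, "p3": 0.11},
    "demo_c": {"p1": 0.74, "p2": 0.09, "p3": 0.03},
}
print(median_top_ranked_signal(suspicious_slice))  # 0.74
```

Computing the same statistic over each slice (suspicious, legit, pro) is what produces the three headline numbers, and comparing their shape is the point of the benchmark.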
The 0.90 top-3 retrieval number answers a different question: how often does a labeled suspicious player appear somewhere in the top three ranked players for a suspicious benchmark demo? That matters because NullCS is framed as triage and review support. The goal is to reliably surface the right players near the top of the lobby, not to claim that one score is a final verdict.
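The top-3 retrieval check can be sketched the same way. Again a hedged illustration with invented data: `ranked` and `labeled` are assumed shapes, not the project's actual structures.

```python
def top3_retrieval_rate(ranked, labeled):
    """Fraction of suspicious demos where at least one labeled
    suspicious player appears among the top three ranked players.

    `ranked` maps demo_id -> player ids sorted strongest-signal first;
    `labeled` maps demo_id -> set of labeled suspicious player ids.
    """
    hits = sum(
        1 for demo, players in ranked.items()
        if labeled[demo] & set(players[:3])  # any labeled player in top 3
    )
    return hits / len(ranked)

# Toy example: one hit (demo_a) and one miss (demo_b).
ranked = {
    "demo_a": ["p2", "p1", "p3", "p4"],
    "demo_b": ["p4", "p3", "p1", "p2"],
}
labeled = {"demo_a": {"p1"}, "demo_b": {"p2"}}
print(top3_retrieval_rate(ranked, labeled))  # 0.5
```

Framing the number this way keeps it a triage metric: it measures whether the right players land near the top of the lobby, not whether any single score is a verdict.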
NullCS works from structured Counter-Strike 2 demo data. The current public stack builds 449 player-level engineered features and deeper encounter-level timing and process channels to study how suspicious behavior actually unfolds.
Some demo metrics can make blatant abuse look obvious, but that is not always enough to settle the case. The harder problem is when strong legitimate play and lower-visibility cheating begin to overlap.
The models are there to rank, organize, and explain suspicious behavior. They are useful when they surface the right players near the top of a lobby while staying quieter on strong legitimate and pro-level slices.
NullCS is a review-support system, not a kernel anticheat and not a universal detector. The current work is serious progress, but the research is still ongoing in a difficult and dynamic environment.
NullCS is being built by Gerry Jones, Jr., who holds a B.S. in Applied Mathematics and is currently pursuing a master's degree in Data Science. The project started after Jones returned to Counter-Strike in April 2025 and repeatedly ran into blatant abuse, including aimbotting, triggerbotting, and spinbotting, with the obvious question of why some of those cases appeared to move through the ecosystem without meaningful response.
That turned into a research problem rather than a complaint. The goal was not to build an anticheat or market a magical detector. The goal was to investigate whether suspicious behavior in demos could be surfaced more systematically, including both the obvious cases and the harder ones: aim assist, recoil assist, and information abuse that often try to stay just subtle enough to blend into strong legitimate play.
Building that required repeated iteration: sourcing demo data, manually pulling and labeling SteamIDs for training, testing different feature and modeling strategies, and spending time inside cheat communities to better understand how players discuss avoiding bans and staying below the threshold of current systems such as VAC. There is still substantial work ahead, but the current state of NullCS already reflects meaningful progress.
Blatant abuse can be loud in the metrics, but that is only one part of the review problem.
The harder cases are subtle aim assist, recoil assist, and information abuse that try to stay close to strong legitimate play.
NullCS is still under active research. The current state reflects real progress, not a claim that the problem has been solved.