Access-Controlled Randomness in TFT: Unlockable Champions and the Structural Logic Behind Patch 16.3

Introduction

In earlier seasons of Teamfight Tactics, the card pool could be modeled in a relatively clean way as a shared and static random resource. Champions were preloaded into a global pool, access to that pool was conditioned primarily on player level, and strategic interaction emerged almost entirely through inventory competition. Within this framework, most balance issues could be addressed locally by tuning pool sizes or appearance probabilities, without altering the structure of the system itself. The introduction of unlockable champions in the current season does not fit comfortably into this model, and treating it as a conventional probability adjustment risks missing what has actually changed.

At first glance, the mechanic appears straightforward. Players satisfy certain conditions, unlock specific champions, and those champions may then appear in their shops. Framed this way, the system looks like a simple expansion of available options. However, this interpretation becomes problematic as soon as one asks a more precise question: are these champions being added to the card pool, or is access to an existing pool being selectively granted? The distinction matters. Directly injecting dozens of new champions into a shared pool would create severe dilution effects, undermine competitive symmetry, and scale poorly across future seasons. For these reasons alone, a naive “add to pool” interpretation is difficult to reconcile with a stable long-term design.

The more coherent reading, and the one adopted in this post, is that the card pool itself remains conceptually intact, while the rules governing how individual players access it have been restructured. Unlocking a champion does not primarily change what exists in the pool, but how that champion is weighted, filtered, and surfaced during shop generation for a given player. In this sense, the innovation is not a new set of cards, but a new access layer that sits between players and a shared resource. The sections that follow focus on this layer, examining how unlockable champions and recent adjustments to four- and five-cost units can be understood as consequences of a single underlying architectural choice rather than isolated balance patches.


The Access Control Layer in System Abstraction

In the classical TFT model, the card pool can be abstracted as a shared finite multiset:

\[\mathcal{C} = \{(c_1, n_1), (c_2, n_2), \dots\}\]

Here, \(c_k\) denotes a specific champion, while \(n_k\) denotes the remaining quantity of that champion in the pool.

When a player \(i\) refreshes their shop at level \(\ell_i\), the operation can be described as a conditional random sampling process:

\[\text{Shop}_i \sim \text{Sample}\big(\mathcal{C} \mid \ell_i\big)\]

The conditioning occurs exclusively at the cost-tier level, through predefined level probability tables. Individual champions within the same cost tier are otherwise symmetric.

The essential property of this model is that all players operate within the same probability space. Differences in outcomes arise only through inventory depletion caused by other players’ purchases.
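A minimal sketch of this classical model, assuming made-up cost-tier odds and pool numbers (not the live TFT tables), illustrates how conditioning happens only at the cost-tier level:

```python
import random

# Hypothetical cost-tier odds for a single level; not the live TFT tables.
LEVEL_ODDS = {7: {1: 0.19, 2: 0.30, 3: 0.35, 4: 0.15, 5: 0.01}}

# Shared pool: champion -> (cost, remaining copies); illustrative numbers.
pool = {"Ahri": (1, 30), "Jinx": (2, 25), "Sett": (4, 10)}

def sample_slot(pool, level, rng=random):
    """Draw one shop slot in the classical model: pick a cost tier from the
    level's odds table, then a champion within that tier, weighted only by
    remaining copies; same-cost champions are otherwise symmetric."""
    odds = LEVEL_ODDS[level]
    tier = rng.choices(list(odds), weights=list(odds.values()))[0]
    in_tier = {c: n for c, (cost, n) in pool.items() if cost == tier and n > 0}
    if not in_tier:
        return None  # tier has no stock here; a real system would re-roll
    return rng.choices(list(in_tier), weights=list(in_tier.values()))[0]
```

Note that nothing in `sample_slot` depends on which player is rolling: two players at the same level sample from identical distributions, differing only through depletion of the shared counts.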

Attempting to introduce unlockable champions directly into the model above leads to an immediate structural conflict. Unlock state \(U_i\) is a player-specific variable, while the sampling space \(\mathcal{C}\) is global and shared. Without an additional abstraction layer, these two elements cannot be composed in a coherent way.

This is precisely why two seemingly obvious approaches fail at the system level. Adding unlocked champions directly into the global pool undermines probability stability. Creating fully independent pools per player eliminates the competitive interaction that defines the game.

In short, the traditional model lacks the expressive capacity required to represent player-specific access constraints.

To preserve both a shared card pool and individualized unlock states, the system must introduce an intermediate layer. The pool itself cannot be sampled directly. Instead, what is sampled is a player-specific method of accessing the pool. I will refer to this layer as an access control layer. The pool remains global, but the path through which each player interacts with it becomes conditional.

Under this revised abstraction, a global shared pool still exists:

\[\mathcal{C}_{\text{global}}\]

However, the shop presented to player \(i\) is generated by:

\[\text{Shop}_i \sim \text{Sample}\big(\mathcal{C}_{\text{global}},\; w_i(\cdot)\big)\]

Here, \(w_i(c) \ge 0\) is a player-dependent weight function. It determines both whether a champion is visible to the player and how frequently it appears. This reframing leads to a crucial distinction. Unlocking does not alter the contents of the card pool. Instead, it modifies the weighting applied during sampling.

For each player \(i\), define an unlock set:

\[U_i \subseteq \mathcal{C}_{\text{global}}\]

The weight function can then be expressed as:

\[w_i(c)=
\begin{cases}
0, & \text{if } c \text{ is unlockable and } c \notin U_i \\
\alpha_i(c), & \text{if } c \text{ is unlockable and } c \in U_i \\
1, & \text{if } c \text{ is a standard champion}
\end{cases}\]

The term \(\alpha_i(c)\) represents a dynamically adjusted parameter. As will be discussed later, it supports mechanisms such as probability decay for ignored champions, compensation under competition, and suppression of uncontrolled three-star acquisition.

Viewed from an implementation perspective, a shop refresh can be described schematically as follows:

function refresh_shop(player i):
    candidates = global_card_pool.remaining()
    weights = empty map

    for card c in candidates:
        weights[c] = base_weight(c, level_i) * access_weight(i, c)

    shop = weighted_sample(candidates, weights, slot_count)
    return shop

The function base_weight corresponds to the traditional level-based cost distribution. The function access_weight represents the newly introduced interface.

This design choice is important. All new mechanics are implemented through adjustments to access_weight, while the underlying pool structure remains untouched.
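A runnable sketch of this composition, with hypothetical `base_weight` and `access_weight` callables standing in for the real tables:

```python
import random

def weighted_sample(cards, weights, k, rng=random):
    """Draw up to k distinct cards without replacement, proportional to weight.
    Cards whose weight is zero can never be drawn."""
    cards, weights = list(cards), list(weights)
    picked = []
    for _ in range(min(k, sum(w > 0 for w in weights))):
        idx = rng.choices(range(len(cards)), weights=weights)[0]
        picked.append(cards.pop(idx))
        weights.pop(idx)
    return picked

def refresh_shop(candidates, base_weight, access_weight, slots=5, rng=random):
    """Compose the level-based base weight with the per-player access weight;
    a zero access weight makes a champion invisible to this player."""
    weights = [base_weight(c) * access_weight(c) for c in candidates]
    return weighted_sample(candidates, weights, slots, rng)
```

Because the two factors multiply, a locked champion (access weight 0) is filtered out without the pool structure or the base probability tables ever changing.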

Official descriptions state that unlocked champions appear in the rightmost shop slot upon refresh. At a system level, this is unlikely to be a purely visual decision.

A more plausible interpretation is that shop generation is divided into multiple sampling stages. At least one slot is drawn from a subspace governed by a distinct weighting scheme, dedicated to unlockable champions. This interpretation aligns naturally with the access control abstraction and will be examined in detail in the next chapter.


Implementation of Unlockable Champions

When a player unlocks a champion, nothing happens to the global card pool. No inventory is added. No card instance is created. No private copy of the pool is spawned. From the system’s perspective, the only event is that the player is now permitted to reference an entity that already exists. This makes unlocking closer to an access control update than to a modification of game data.

For each player \(i\), the system maintains a set

\[U_i \subseteq \mathcal{C}_{\text{unlockable}},\]

representing the unlockable champions that the player is allowed to access.

At the implementation level, this is likely no more than a set of identifiers:

player.unlocked_champions = Set<champion_id>

The unlock operation itself is therefore a pure state update:

function unlock(player i, champion c):
    i.unlocked_champions.add(c)

Crucially, this operation has no immediate gameplay effect. The behavioral change only manifests at the next shop refresh.

The official description emphasizes that an unlocked champion appears in the rightmost shop slot upon the next refresh. A natural and low-coupling implementation would separate the process as follows:

  • Standard slots are sampled from the regular card pool, using the traditional cost-tier probabilities.
  • A dedicated unlockable slot is sampled from the subset of unlocked champions, using a separate weighting scheme.

In pseudocode, this could be expressed as:

function refresh_shop(player i):
    shop = []

    shop += sample_standard_pool(i, standard_slots)
    shop += sample_unlockable_pool(i, unlockable_slot)

    return shop

This structure has several desirable properties. Existing logic remains untouched. The unlockable system can be inserted or removed as a module. The visual layout of the shop aligns directly with its underlying semantics.
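The two-stage split above can be sketched as follows; the champion ids, slot counts, and uniform sampling are all illustrative assumptions, not the live implementation:

```python
import random

UNLOCKABLE = {"U1", "U2"}  # hypothetical unlockable champion ids

def refresh_shop(unlocked, pool, standard_slots=4, rng=random):
    """Two-stage refresh: standard slots come from the regular pool, plus one
    dedicated rightmost slot drawn only from champions this player unlocked."""
    standard = [c for c, n in pool.items() if n > 0 and c not in UNLOCKABLE]
    eligible = [c for c, n in pool.items() if n > 0 and c in unlocked]
    shop = [rng.choice(standard) for _ in range(standard_slots)]
    if eligible:
        shop.append(rng.choice(eligible))  # the dedicated unlockable slot
    return shop
```

The separation makes the coupling explicit: an unlockable champion another player has unlocked (here `U2`) can never leak into this player's shop, even though its inventory lives in the same shared pool.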

One of the most technically revealing rules is the following: if a player unlocks a champion but repeatedly ignores it, the probability of seeing that champion decreases, down to a floor of 20 percent. To implement this rule, the system must maintain, for each player–champion pair, a state variable

\[s_i(c) \in [\beta, 1], \quad \beta = 0.2.\]

Here, \(s_i(c)\) represents the current weight modifier for champion \(c\) when sampled for player \(i\). The variable initializes at 1, decreases monotonically with consecutive non-purchases, and resets to 1 upon purchase.
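This lifecycle can be sketched as a small state object. The 20 percent floor is stated in the official rules; the multiplicative decay rate is my assumption, since the live value is unknown:

```python
FLOOR = 0.2   # the stated 20 percent floor
DECAY = 0.9   # hypothetical per-refresh decay rate; the live value is unknown

class InterestState:
    """Per player-champion modifier s_i(c): starts at 1, shrinks each time
    the champion is shown but ignored, resets to 1 on purchase, and never
    falls below the floor."""
    def __init__(self):
        self.s = 1.0

    def on_ignored(self):
        self.s = max(FLOOR, self.s * DECAY)

    def on_purchased(self):
        self.s = 1.0
```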

Combining this with the abstraction from the previous chapter, the effective weight becomes

\[w_i(c) = w_{\text{base}}(c, \ell_i) \cdot s_i(c),\]

where \(w_{\text{base}}\) encodes the standard level-dependent cost probabilities, and \(s_i(c)\) acts as a behavior-conditioned correction term.

From an engineering standpoint, this structure is unusually clean. The base probabilities remain stable. Behavioral feedback is isolated in a multiplicative factor. The mechanism can be tuned, extended, or disabled without rewriting the core sampling logic.

Without such decay, unlocked champions would permanently pollute the shop, even when the player has no intention of using them. The system would repeatedly surface options that the player has already rejected.

With decay in place, the shop gradually converges toward the player’s revealed preferences. The system infers intent implicitly, without requiring any explicit declaration. In effect, this is a weakly adaptive random process.

Another point that invites confusion is the statement that, when multiple players unlock the same unit, unlocked champions are still drawn from a shared pool. This implies a strict separation of responsibilities: the inventory is global, while the probabilities are local.

Let \(N_c\) denote the global remaining count of an unlockable champion \(c\).

When any player draws \(c\), the value of \(N_c\) decreases by one. However, the probability that \(c\) appears in a given player’s shop is governed by that player’s weight function \(w_i(c)\). This results in an asymmetric access model to a shared resource.
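This split, global inventory with local probabilities, can be sketched as a pool object that every player draws through their own weight function; the class and its shape are my illustration, not the game's code:

```python
import random

class SharedPool:
    """Shared inventory, local probabilities: any player's draw depletes the
    global count, but whether a champion is even visible to a given player
    is decided by that player's weight function w_i."""
    def __init__(self, counts):
        self.counts = dict(counts)  # champion -> remaining copies

    def draw(self, w, rng=random):
        visible = [c for c, n in self.counts.items() if n > 0 and w(c) > 0]
        if not visible:
            return None
        card = rng.choices(visible,
                           weights=[w(c) * self.counts[c] for c in visible])[0]
        self.counts[card] -= 1  # depletion is felt by every player
        return card
```

Two players can hold completely different weight functions over the same `SharedPool` instance, yet every purchase by one shrinks the counts the other samples from.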

If probabilities were shared but inventories were private, competition would effectively disappear. Card denial, pool reading, and all higher-order interactions around contested units would collapse.

Once this architecture is in place, an issue becomes unavoidable. If a high-cost champion is unlocked by only a single player, and its inventory faces no competition, the system risks becoming overly permissive. Given enough time, a three-star outcome approaches certainty. The next chapter addresses how this problem is resolved, using the same architectural principles rather than ad hoc fixes.


Conditional Correction in Patch 16.3

In Patch 16.3, Teamfight Tactics introduced an adjustment to the behavior of unlockable champions:

  • For unlockable 4-cost and 5-cost champions, it has become harder to reach 3-star when there is no competition, meaning only a single player has unlocked the champion.
  • When there is competition, defined as at least one other player unlocking the same champion, reaching 2-star becomes easier, while reaching 3-star remains difficult.

Among players, this change has often been interpreted in practical terms. Common explanations include the claim that the developers no longer want players to force 3-star 4-cost champions at level 8, or that solo unlock strategies were quietly nerfed through probability adjustments. Viewed through the architectural framework established in the previous chapters, however, these explanations miss the underlying structure. The 16.3 update is not an isolated balance patch.

The unlockable champion system implicitly assumes that access to a champion will, in most games, be shared by multiple players. This assumption follows directly from two design commitments discussed earlier: inventory remains global, and competitive pressure is preserved through shared depletion.

In high-level play, a different pattern quickly emerged. A single player would unlock a specific 4-cost or 5-cost champion, while other players deliberately avoided unlocking it. The result was a prolonged level 8 refresh strategy, with no effective competition for inventory. Over time, the randomness normally enforced by shared access was averaged out. Under these conditions, reaching 3-star became increasingly close to a certainty.

From a systems perspective, this is not simply a strong strategy. It is an indication that the access control layer is being pushed into a state it was not designed to reward. The card pool remains shared in name, but its effective behavior collapses into near exclusivity.

Before Patch 16.3, the probability of a player \(i\) seeing a given champion \(c\) could be abstracted as:

\[\mathbb{P}_i(c) = f(\ell_i, \text{cost}_c) \cdot s_i(c)\]

Here, \(\ell_i\) denotes player level, and \(s_i(c)\) captures behavioral modifiers such as the decay applied when an unlocked champion is repeatedly ignored. Crucially, this formulation does not encode whether other players have access to the same champion.

The 16.3 update introduces a structural change rather than a numeric tweak. The probability model now depends on an additional input:

\[\mathbb{P}_i(c) = f(\ell_i, \text{cost}_c, n_c) \cdot s_i(c)\]

The new variable \(n_c\) represents the number of players who have unlocked champion \(c\). This change alters the signature of the probability function itself. In implementation terms, it requires the system to explicitly query global unlock state and incorporate it into local sampling logic.

Consider the case where \(n_c = 1\). A single player holds access to a high-cost champion, while inventory depletion proceeds unopposed. Under these circumstances, the stochastic element of the card pool is weakened by repetition over time. The system no longer tests judgment under uncertainty, but patience under certainty.

From a design standpoint, this outcome is undesirable. It shifts optimization away from decision-making and toward mechanical persistence. The system’s response in 16.3 is therefore to reduce access weight in this specific branch, not by shrinking inventory or enforcing hard caps, but by lowering the effective sampling rate.

At first glance, the second part of the update appears counterintuitive. If another player is contesting the same champion, why should reaching 2-star become easier rather than harder? The answer lies in the system’s implicit target states. In practice, high-cost champions are designed around two distinct milestones. Two-star represents a stable and intended outcome. Three-star is meant to be exceptional, requiring significant cost and meaningful interaction.

When \(n_c \ge 2\), the system returns to a familiar competitive regime. Inventory pressure is real, and outcomes depend on timing and choices rather than isolation. Under these conditions, the system can safely increase the likelihood of reaching the intermediate state without undermining its broader incentives. Three-star remains constrained by inventory limits and cumulative probability decay.
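The two branches can be sketched as a single modifier on the access weight. The branching structure follows the patch description, but every numeric factor here is illustrative, not a live value:

```python
def access_modifier(cost, n_c, copies_owned):
    """Hypothetical shape of the 16.3 branching: solo access (n_c == 1) to a
    4/5-cost champion is suppressed, while contested access is boosted only
    while the player is still chasing 2-star (fewer than 3 copies owned)."""
    if cost < 4:
        return 1.0          # low-cost units are unaffected
    if n_c == 1:
        return 0.6          # uncontested: harder to grind toward 3-star
    if copies_owned < 3:
        return 1.3          # contested: easier to reach 2-star
    return 1.0              # past 2-star, no extra help toward 3-star
```

The point of the sketch is the signature: the modifier needs the global unlock count \(n_c\) as an input, which is exactly the structural change the patch introduces.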

It is tempting to frame the 16.3 changes as a punishment aimed at specific player behavior. That framing, I would argue, is misleading. The system does not evaluate whether a strategy is correct or incorrect. It responds to what the strategy optimizes.

Before 16.3, solo unlock combined with extended refreshing was implicitly rewarded. After 16.3, the same behavior is recognized as an abnormal access pattern and adjusted accordingly. Competition, by contrast, is treated as a stabilizing signal that allows probabilities to behave more traditionally.



