Human Mimicry 9.2V

An LLM predicts the next word based on a probability distribution. Let $P(w_1, w_2, \ldots, w_N)$ be the probability of a sequence of words. Perplexity is defined as

$$PP(W) = P(w_1, w_2, \ldots, w_N)^{-\frac{1}{N}}$$

or, using the chain rule of probability,

$$PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1, \ldots, w_{i-1})}}$$

Detectors look for low perplexity (high probability). The prompt instruction "Do not choose the most statistically probable next token" forces the model to select tokens from lower in the probability distribution (e.g., the 3rd or 4th most likely word rather than the 1st), artificially inflating the $PP$ value to match human levels.
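To make this concrete, here is a minimal Python sketch of the effect. The per-step distributions are made-up toy numbers (a real model would produce one such distribution at each decoding step), and the function names are hypothetical. It computes $PP(W)$ as the exponent of the negative mean log probability, which is equivalent to the product form above, and compares greedy decoding against always taking the 3rd-ranked token.

```python
import math

# Toy next-token distributions for a 4-token sequence.
# Hypothetical values for illustration only; a real LM
# would supply one such distribution per decoding step.
step_probs = [
    {"the": 0.60, "a": 0.20, "one": 0.10, "this": 0.05},
    {"cat": 0.50, "dog": 0.25, "fox": 0.15, "bird": 0.05},
    {"sat": 0.55, "slept": 0.20, "ran": 0.15, "hid": 0.05},
    {"quietly": 0.40, "there": 0.30, "alone": 0.20, "again": 0.05},
]

def perplexity(chosen):
    """PP(W) = exp(-(1/N) * sum_i log P(w_i | w_1..w_{i-1}))."""
    n = len(chosen)
    log_prob = sum(math.log(dist[tok]) for dist, tok in zip(step_probs, chosen))
    return math.exp(-log_prob / n)

def nth_ranked(dist, rank):
    """Return the token with the given probability rank (1 = most likely)."""
    return sorted(dist, key=dist.get, reverse=True)[rank - 1]

# Greedy decoding: always take the most probable token.
greedy = [nth_ranked(d, 1) for d in step_probs]

# "Human mimicry" decoding: deliberately take the 3rd-ranked token.
deranked = [nth_ranked(d, 3) for d in step_probs]

print(f"greedy   PP = {perplexity(greedy):.2f}")    # ~1.97: low perplexity, reads as AI
print(f"deranked PP = {perplexity(deranked):.2f}")  # ~6.87: inflated toward human-typical levels
```

On these toy distributions the deranked sequence scores roughly 3.5x the greedy perplexity, which is exactly the signal shift the prompt instruction exploits.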


The result is output that bypasses AI detection algorithms (such as GPTZero, Originality.ai, or Turnitin).
